How does Python stop a recursion? - python

Imagine I calculate the Fibonacci sequence with the (obviously inefficient) recursive algorithm:
def Fibo(n):
    if n <= 1:
        return n
    else:
        return Fibo(n-2) + Fibo(n-1)
Then my question is: how does Python know it has to stop the recursion at n=0?
After all, if I call Fibo(-12), Python obviously answers -12, so why does it stop the recursion at n=0 when I call Fibo(12), for instance?
Edit after a few comments:
This question has nothing to do with the mathematical concept of recurrence. I know a recurrence stops at the point where it is initialized. I would like to understand how recursion is implemented on a computer. To me it is absolutely not clear why a computer should stop when there is no explicit stop instruction. What prevents Fibo(0)=Fibo(-1)+Fibo(-2) here from continuing endlessly? Because, after all, I specified that Fibo(-1)=-1, Fibo(-2)=-2, ... and I might want to sum all the negative numbers as well ...
I confess that in that case I would prefer to use a while loop.

It's a functional definition, so nothing "runs" in the sense of a loop, and therefore nothing needs to "stop". You are (still) thinking in terms of iterative programming and assume some kind of loop here which needs to stop at some point. That is not the case.
Instead, in this paradigm you just state that the return value is the sum of the two preceding numbers. At this point you don't care how those preceding numbers are produced; you simply assume that they already exist.
Of course they don't exist yet and you will have to compute them as well, but this is still not a loop which needs a stop. It is a recursion which has an anchor. With each recursion step the values become smaller and smaller, and once they drop below 2 you just return 0 or 1 without any further recursion. That is your anchor.
Feel free to think of it as a "stopping point", but be aware that there is no loop happening which you need to break out of or anything similar.
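To make the anchor visible, here is a small traced variant (my sketch, not part of the original code) that prints every call; you can watch each branch bottom out as soon as n drops to 1 or 0:

def fibo_traced(n, depth=0):
    # Show each call, indented by recursion depth, so the anchor is visible.
    print("    " * depth + "Fibo(%d)" % n)
    if n <= 1:
        # The anchor (base case): no further recursive calls are made here.
        return n
    # Both recursive calls work on a strictly smaller n, so every branch
    # of the call tree eventually reaches the anchor and returns.
    return fibo_traced(n - 2, depth + 1) + fibo_traced(n - 1, depth + 1)

print(fibo_traced(4))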

Related

Time to execute function goes down instead of up as x increases - range()

My bad if this is a bit noob-ish or I just don't understand how this works. I'm trying to time range(1, x) using the code below.
Code
import timeit

def main(x):
    return range(1, x)

def timeThem(x):
    start = timeit.default_timer()
    main(x)
    stop = timeit.default_timer()
    return stop - start

for i in range(5):
    print(timeThem(i))
Now I would expect that, since x is getting larger in range(1, x), the time it takes to execute would get longer. I'd guess it would look something like this.
Expected Output
.01
.02
.03
.04
.05
But no, my timings get shorter for some reason. As shown below, I get something totally different from what I had imagined.
Received Output
8.219999999975469e-07
6.740000000060586e-07
1.0670000000004287e-06
4.939999999967193e-07
4.420000000032731e-07
What am I doing wrong here? Or do I just not understand how this really works?
From the range documentation:
Return an object that produces a sequence of integers from start (inclusive) to stop (exclusive) by step.
range does not produce the actual sequence, so it can run in constant time. Note that iterating over the result is done in linear time.
Furthermore, your values are far too small to show any significant difference in timing even if range did run in linear time. Consequently, you are measuring noise.
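As a rough illustration (my own sketch, not from the answer; exact numbers will vary by machine), you can compare creating a range with actually consuming it:

import timeit

# Creating the range object is roughly constant time, regardless of the stop value.
print(timeit.timeit("range(1, 10)", number=100_000))
print(timeit.timeit("range(1, 10_000_000)", number=100_000))

# Actually consuming the range (here via sum) scales with its length.
print(timeit.timeit("sum(range(1, 10))", number=100))
print(timeit.timeit("sum(range(1, 1_000_000))", number=100))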
Your main function only returns a lazy range object (often loosely called a generator), not a materialized sequence:
def main(x):
    return range(1, x)
Basically, the sequence is not produced right away; the returned object just stores its endpoints and nothing has been evaluated yet. So it does not matter whether you pass x=1, x=100 or x=1000000. From a performance point of view it is basically like returning a tuple:
def main(x):
    return (1, x)
This is due to the lazy nature of such objects: they only get evaluated when you iterate over them. E.g. list(range(0, <infinity>)) would blow up your memory, but for i in range(0, <infinity>): print(i) would just take forever to compute.
So range(1, x) just creates one object - it does not evaluate it.
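A small sketch of that laziness (my addition, not the answer author's): creating a huge range is instant, and only forcing it costs anything.

r = range(1, 10**12)   # returned instantly: only start, stop and step are stored
print(len(r))          # also cheap: the length is computed from the endpoints
# list(r) or sum(r) would actually walk all ~10**12 elements, which is where
# the real cost in time (and, for list, memory) shows up.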
Also, please be aware that Python has different naming conventions from languages such as Java or JavaScript, where timeThem would be a proper name. In Python we follow PEP 8, which says one should use snake_case, like time_them.
Personally, I would recommend something like time_function to be even more explicit about what your function is supposed to do.

I'm trying to find happy numbers but my program goes into an endless loop

I've done everything in my ability to do this without any functions and with only while loops (as required by my teacher). Please could you find out why it fails? I even tried dry running it, and it looks like it should work.
P.S. It doesn't work either way, using:
while c!=1 and f<50: or while c!=1 or f<50:
happy_num=1
x=[0]*30
f=0
#f is a safety measure so that the program has a stop and doesn't go out of control
happynumbers=" "
number=int(input("input number "))
while happy_num!=31:
    c=0
    happy=number
    while c!=1 or f<50:
        integer=number
        f=f+1
        if integer<10:
            a=number
            b=0
            d=0
        elif integer<100:
            a=number // 10
            b=number % 10
            d=0
        else:
            a=number // 10
            bee=number % 100
            b=bee // 10
            d=number % 10
        number=(a*a)+(b*b)+(d*d)
        c=number
    x[happy_num-1]=happy
    if c==1:
        happy_num+=1
    elif f>49:
        print("too many iterations program shutting down")
        exit()
    number=happy+1
print("the happy numbers are: ", happynumbers)
Firstly, you say:
It doesn't work either way, using: while c!=1 and f<50: or while c!=1 or f<50:
You should use the one with and (while c!=1 and f<50:), because otherwise it is useless as a failsafe. Right now your program gets stuck anyway, so it might seem to you not to make a difference; I understand that. In general it is important that a failsafe is added with and, not with or (because True or True == True and True or False == True, so when your loop is infinite, the f<50 failsafe will not make any difference to the truth value of the guard).
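A tiny illustration (values of my own choosing) of why the or-guard cannot act as a failsafe once the loop is stuck:

c, f = 5, 100             # a "stuck" state: c never becomes 1, f is already past the cap
print(c != 1 or f < 50)   # True  -> the or-guard keeps the loop running anyway
print(c != 1 and f < 50)  # False -> the and-guard would stop it, as intended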
Adding print statements to your program, you can see that at around f=30 the program starts to become very slow, but this is not because of some infinite loop or anything; it's because the computations start to become very big at the line:
number=(a*a)+(b*b)+(d*d)
So f never actually reaches 50, because the program gets stuck trying to perform enormous multiplications. So much for your failsafe :/
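To see that growth concretely, here is a small sketch of my own that applies the same digit split as your else branch to a four-digit number; because a = number // 10 keeps almost the whole number, squaring it makes number explode within a few iterations:

number = 1000
for step in range(6):
    a = number // 10          # for a 4-digit number this still keeps 3 digits
    b = (number % 100) // 10
    d = number % 10
    number = (a*a) + (b*b) + (d*d)
    print(step, number)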
I am not sure what algorithm you are using to find these happy numbers, but my guess is that there is something lacking in the guard of your inner loop, or some break statement is missing. Do you account for the situation where the number isn't a happy number? Somehow you should also exit the loop in that situation. If you have some more background on what the program is supposed to do, that would be very helpful.
Edit: judging from the programming example on the Wikipedia page of happy numbers, you need some way to keep track of which numbers you've already 'visited' during your computations; that is the only way to know that some number is not a happy number.
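For illustration only (this deliberately ignores the no-functions constraint of your assignment, so treat it as a sketch of the idea rather than a drop-in solution), the visited-set approach could look like this:

def is_happy(n):
    # Keep every number we have already produced; seeing one again means we
    # are stuck in a cycle that never reaches 1, so n is not a happy number.
    seen = set()
    while n != 1 and n not in seen:
        seen.add(n)
        n = sum(int(digit) ** 2 for digit in str(n))
    return n == 1

# First 30 happy numbers, starting the search at 1.
found, candidate = [], 1
while len(found) < 30:
    if is_happy(candidate):
        found.append(candidate)
    candidate += 1
print("the happy numbers are:", found)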
Plus, the question I was given was to find the first 30 happy numbers, so the input would be 1.

How do I stop my backtracking algorithm once I find an answer?

I have written this code to solve a problem given to me in class; the task was to solve the "toads and frogs problem" using backtracking. My code solves the problem but does not stop once it reaches the solution (it keeps printing "states" showing other paths that are not a solution to the problem). Is there a way to do this? This is the code:
def solution_recursive(frogs):
    # Prints the state of the problem (example: "L L L L _ R R R R" in the starting case
    # when all the "left" frogs are on the left side and all the "right" frogs are on
    # the right side)
    show_frogs(frogs)
    # If the solution is found, return the list of frogs that contains the right order
    if frogs == ["R","R","R","R","E","L","L","L","L"]:
        return frogs
    # If the solution isn't the actual state, then start (or continue) recursion
    else:
        # S_prime contains possible solutions to the problem a.k.a. "moves"
        S_prime = possible_movements(frogs)
        # While S_prime contains solutions, do the following
        while len(S_prime) > 0:
            s = S_prime[0]
            S_prime.pop(0)
            # Start again with solution s
            solution_recursive(s)
Thanks in advance!
How do I stop my backtracking algorithm once I find an answer?
You could use Python's exception facilities for such a purpose.
You could also adopt the convention that your solution_recursive returns a boolean telling the caller to stop the backtracking.
It is also a matter of taste or of opinion.
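As a generic, hedged sketch of that boolean convention (deliberately not the frogs problem itself; is_goal and next_moves are placeholder callables standing in for your problem-specific parts), a backtracking search can propagate "found" upwards like this:

def backtrack(state, is_goal, next_moves):
    # is_goal and next_moves are assumed helpers, for illustration only.
    if is_goal(state):
        return True            # solution found: tell the caller to stop
    for move in next_moves(state):
        if backtrack(move, is_goal, next_moves):
            return True        # a deeper call found it: stop trying siblings
    return False               # nothing reachable from this state works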
I'd like to expand a bit on your recursive code.
One of your problems is that your program displays paths that are not solutions. This is because each call to solution_recursive starts with
show_frogs(frogs)
regardless of whether frogs is the solution or not.
Then, you say that the program is continuing even after the solution has been found. There are two reasons for this, the first being that your while loop doesn't care about whether the solution has been found or not, it will go through all the possible moves:
while len(S_prime) > 0:
And the other reason is that you are reinitializing S_prime every time this function is called. I'm actually quite amazed that it didn't enter an infinite loop just checking the first move over and over again.
Since this is a problem in class, I won't give you an exact solution but these problems need to be resolved and I'm sure that your teaching material can guide you.

Python performance questions for: toggling +/-1, set instantiation, set membership check

I've been working on the following code, which sort of maximizes the number of unique (in lowest common denominator) p by q blocks with some constraints. It works perfectly for small inputs, e.g. input 50000, output 1898.
I need to run it for numbers greater than 10^18, and while I have a different solution that gets the job done, this particular version gets super slow (it made my desktop reboot at one point), and that is what my question is about.
I'm trying to figure out what is causing the slowdown in the following code, and to figure out in what order of magnitude they are slow.
The candidates for slowness:
1) The (-1)**(i+1) term? Does Python handle this efficiently, or is it literally multiplying -1 by itself a ton of times?
[EDIT: still looking into how operator.__pow__ works, but having tested toggling j=-j instead: that is faster.]
2) set instantiation/size? Is the set getting too large? Obviously this would impact membership check if the set can't get built.
3) set membership check? This indicates O(1) behavior, although I suppose the constant continues to change.
Thanks in advance for insight into these processes.
import math
import time

a = 10**18
ti = time.time()
setfrac = set([1])
x = 1
y = 1
k = 2
while True:
    k += 1
    t = 0
    for i in xrange(1, k):
        mo = math.ceil(k/2.0) + ((-1)**(i+1)) * (math.floor(i/2.0))
        if (mo/(k-mo) not in setfrac) and (x+(k-mo) <= a and y+mo <= a):
            setfrac.add(mo/(k-mo))
            x += k-mo
            y += mo
            t += 1
    if t == 0:
        break
print len(setfrac)+1
print x
print y
to = time.time()-ti
print to
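One way to test candidates 1 and 3 yourself is a couple of timeit micro-benchmarks. This is a rough Python 3 sketch of my own (the code above is Python 2), and the exact numbers will vary by machine and version:

import timeit

# Candidate 1: (-1)**(i+1) versus just flipping a sign variable.
print(timeit.timeit("s = (-1)**(i+1)", setup="i = 12345", number=1_000_000))
print(timeit.timeit("j = -j", setup="j = 1", number=1_000_000))  # usually clearly faster

# Candidate 3: set membership stays roughly O(1) even as the set grows.
print(timeit.timeit("0.5 in s", setup="s = {i/7 for i in range(1_000)}", number=100_000))
print(timeit.timeit("0.5 in s", setup="s = {i/7 for i in range(1_000_000)}", number=100_000))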

How to make a recursive program run for a long time without getting a RuntimeError in Python

This code is the recursive factorial function.
The problem is that if I want to calculate a very large number, it raises this error:
RuntimeError: maximum recursion depth exceeded
import time

def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

print " The factorial of the number is: ", factorial(1500)
time.sleep(3600)
The goal is for the recursive factorial function to keep calculating for a maximum of one hour.
This is a really bad idea. Python is not at all well-suited for recursing that many times. I'd strongly recommend you switch this to a loop which checks a timer and stops when it reaches the limit.
But, if you're seriously interested in increasing the recursion limit in CPython (the default depth is 1000), there's a sys setting for that: sys.setrecursionlimit. Note, as the documentation says, that "the highest possible limit is platform-dependent" - meaning there's no way to know when your program will fail. Nor is there any way you, I or CPython could ever tell whether your program will recurse for something as irrelevant to the actual execution of your code as "an hour". (Just for fun, I tried this with a method that passes an int counting how many times it has already recursed, and I got to 9755 before IDLE totally restarted itself.)
Here's an example of a way I think you should do this:
# be sure to import time
start_time = time.time()
counter = 1
# will execute for an hour
while time.time() < start_time + 3600:
    factorial(counter)  # presumably you'd want to do something with the return value here
    counter += 1
You should also keep in mind that, regardless of whether you use iteration or recursion (unless you're using a separate thread), you're still going to block the entire program for the entirety of the hour.
Don't do that. There is an upper limit on how deep your recursion can get. Instead, do something like this:
def factorial(n):
    result = 1
    for i in range(1, n+1):
        result *= i
    return result
Any recursive function can be rewritten as an iterative one. If your code is fancier than this, show us the actual code and we'll help you rewrite it.
A few things to note here:
You can increase the recursion limit with:
import sys
sys.setrecursionlimit(someNumber) # may be 20000 or bigger.
This basically just raises your recursion limit. Note that for it to run for one hour, this number would have to be so unreasonably big that it is practically impossible (the interpreter would overflow the real C stack long before then). This is one of the problems with recursion, and it is why people turn to iterative programs.
So basically what you want is practically impossible, and you would be better off with a loop/while approach.
Moreover, your sleep call does not do what you want. sleep just forces the program to wait additional time (freezing your program).
It is a guard against stack overflow. You can change the recursion limit with sys.setrecursionlimit(newLimit), where newLimit is an integer.
Python isn't a functional language. Rewriting the algorithm iteratively, if possible, is generally a better idea.
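Putting the two suggestions together, here is a hedged sketch of the raised-limit route for the original factorial(1500) call; the safe ceiling is platform-dependent, so treat the chosen limit as an assumption:

import sys

def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

# factorial(1500) needs roughly 1500 nested frames plus whatever Python itself
# is already using, so raise the limit comfortably above that. Pushing the
# limit into the hundreds of thousands risks a real (C-level) stack overflow.
sys.setrecursionlimit(3000)
print("The factorial of the number is:", factorial(1500))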
