How do I get rid of lots of function calls? Here is an example of a recursive function:
def factorial(n):
    if n <= 1:
        return 1
    else:
        return n * factorial(n - 1)
I heard you can do it easily with decorators, but I don't know how to use them.
Assuming you have already looked over your algorithm carefully and eliminated any redundant calls, you could try to rewrite your function in an iterative way (i.e., using loops rather than recursion).
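For the factorial above, a minimal iterative rewrite might look like this (the function name is just illustrative):

def factorial_iterative(n):
    # Multiply 1 * 2 * ... * n with a loop instead of recursive calls.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result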
Recursion can often express a solution to a problem in a nice way, but it is rather memory hungry (saving state on the stack repeatedly) and not that speedy due to all the function calls. I see its main benefit in its expressive power.
Memoization is another option: rather than re-computing a value (calling the function again), you first check whether you have previously computed (and stored) that value, and if so you use the stored result instead.
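As a sketch of that idea (assuming Python 3.2+; functools.lru_cache is a standard-library memoizing decorator, and the function name here is made up):

from functools import lru_cache

@lru_cache(maxsize=None)          # remember every value ever computed
def factorial_memo(n):
    if n <= 1:
        return 1
    return n * factorial_memo(n - 1)

factorial_memo(30)   # computes and caches 1..30
factorial_memo(31)   # needs only one new multiplication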
You're looking for tail-call optimization, which is basically a technique for converting recursive programs into iterative ones without rewriting them. For example, if you call your factorial function with n = 1000, Python fails, complaining that the "maximum recursion depth exceeded". However, when you rewrite the function to be tail-recursive:
def tail_factorial(n, result=1):
    if n <= 1:
        return result
    else:
        return tail_factorial(n - 1, result * n)
and then drive it with a "trampoline" (note that in the trampolined version the recursive step returns a lambda instead of calling itself directly):
def trampoline_factorial(n):
    def fac(n, result=1):
        if n <= 1:
            return result
        else:
            # Return a thunk instead of recursing, so the stack never grows.
            return lambda: fac(n - 1, result * n)
    f = fac(n)
    while callable(f):
        f = f()
    return f
you can evaluate 1000! without any problems.
Tail-call optimization can indeed be automated in Python using decorators; see e.g. here.
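The decorator behind that link is not reproduced here, but a sketch in the spirit of the well-known "tail_call_optimized" recipe looks like this (the exception class and the frame inspection below are my illustration, not code from the linked page):

import sys

class TailRecurseException(Exception):
    def __init__(self, args, kwargs):
        self.args = args
        self.kwargs = kwargs

def tail_call_optimized(g):
    # Run a tail-recursive function iteratively by intercepting
    # recursive calls and replaying them in a loop.
    def func(*args, **kwargs):
        f = sys._getframe()
        # If the caller's caller is this same wrapper, we are inside a
        # recursive call: unwind by raising the new arguments instead.
        if f.f_back and f.f_back.f_back and f.f_back.f_back.f_code == f.f_code:
            raise TailRecurseException(args, kwargs)
        while True:
            try:
                return g(*args, **kwargs)
            except TailRecurseException as e:
                args, kwargs = e.args, e.kwargs
    return func

@tail_call_optimized
def factorial(n, result=1):
    if n <= 1:
        return result
    return factorial(n - 1, result * n)

factorial(2000)  # no "maximum recursion depth exceeded"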
I want to know the computational complexity of pow in Python, specifically the two-argument form (plain exponentiation).
I have this code, and I know the computational complexity of the for loop is O(n), but I don't know whether the pow call affects the complexity.
def function(alpha, beta, p):
    for x in range(1, p):
        beta2 = (pow(alpha, x)) % p
        if beta == beta2:
            return x
        else:
            print("no existe")
As mentioned in the comments, the official Python interpreter does a lot of optimization for its internal math functions. The usual exponentiation of the form A ** B evaluates its two operands through generic object pointers (in fact every Python variable is a reference to an object, which is why you never have to declare data types), and this generic dispatch is slow.
By contrast, the interpreter can optimize the operands inside pow, fixing their types as int, and thereby reduce the work per call.
You can also read this answer, which should fully answer your question:
Why is time complexity O(1) for pow(x,y) while it is O(n) for x**y?
Now that you have posted the code, the problem is clearer. In algorithm analysis we usually treat each basic operation as O(1); it does not matter how many such operations appear inside the loop body, because constant factors are dropped by the definition of O() notation.
For a typical program, only the loops matter, so for your program the complexity should just be O(n):
def function(alpha, beta, p):
    for x in range(1, p):            # Only one loop
        beta2 = (pow(alpha, x)) % p
        if beta == beta2:
            return x
        else:
            print("no existe")
Why does Python create a new frame for each recursive function call in object oriented programming?
I have tried to search for answers on the internet but could not find any specific reason or justification for it.
In some cases, Python could absolutely get away with reusing a stack frame for recursive function calls:
def factorial(n, a=1):
    if n == 0:
        return a
    else:
        return factorial(n - 1, n * a)
But often every call needs its own stack frame, since there's some state that's unique to each iteration. Let's say that instead of returning the values immediately, we wanted to print them out:
def factorial2(n, depth=0):
    if n == 0:
        value = 1
    else:
        value = n * factorial2(n - 1, depth + 1)
    print(f"Depth: {depth}, Value: {value}")
    return value
If we call factorial2(3), then by the time we're at the deepest function call, there are four different depth and value variables in different stack frames. Python needs to use these values later, so it can't throw away the stack frames in the meantime.
Languages like Scheme still create new stack frames for recursive functions in the general case, but they can avoid it in the special case of tail-call recursion. In the first factorial, the recursion is the very last thing that happens before the function returns, so a language like Scheme would know it could re-use the stack frame.
Python could implement this optimization, but Guido van Rossum has opposed it, arguing that it would make debugging harder and encourage non-Pythonic code. You can read these blog articles for his full thought process:
http://neopythonic.blogspot.com/2009/04/tail-recursion-elimination.html
http://neopythonic.blogspot.com/2009/04/final-words-on-tail-calls.html
I am relatively new to Python and have recently learned about recursion. When tasked with finding the factorial of a number, I used this:
def factorial(n):
    product = 1
    for z in xrange(1, n + 1):
        product *= z
    return product

if __name__ == "__main__":
    n = int(raw_input().strip())
    result = factorial(n)
    print result
Then, because the task was to use recursion, I created a solution that used recursion:
def factorial(n):
    if n == 1:
        current = 1
    else:
        current = n * factorial(n - 1)
    return current

if __name__ == "__main__":
    n = int(raw_input().strip())
    result = factorial(n)
    print result
Both seem to produce the same result. My question is why would I ever use recursion, if I could use a for loop instead? Are there situations where I cannot just create for loops instead of using recursion?
For every solution that you find with recursion there is an iterative solution, because you can, for example, simulate the recursion using a stack, as sketched below.
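As an illustration of that simulation (a sketch; the function name is made up):

def factorial_with_stack(n):
    # Push the pending multiplications that recursion would have kept
    # on the call stack, then unwind them.
    stack = []
    while n > 1:
        stack.append(n)
        n -= 1
    result = 1
    while stack:
        result *= stack.pop()
    return result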
The factorial example uses a type of recursion called tail recursion, and such cases have an easy iterative implementation, but the recursive solution is closer to the mathematical definition. There are other problems, however, for which finding an iterative solution is difficult and a recursive solution is more powerful and more expressive. One example is the Tower of Hanoi (see this question for more information: Tower of Hanoi: Recursive Algorithm); the iterative solution to that problem is very tedious and in the end has to simulate the recursion anyway.
There are problems, like the Fibonacci sequence, whose definition is recursive and for which a recursive solution is easy to write:
def fibonacci(n):
    if (n == 1) or (n == 2):
        return 1
    else:  # n > 2
        return fibonacci(n - 2) + fibonacci(n - 1)
This solution is straightforward, but it recalculates the same Fibonacci values many times unnecessarily; drawing the call tree of fibonacci(7) makes this easy to see.
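For comparison, an iterative version computes each value exactly once (a sketch; not part of the original answer):

def fibonacci_iterative(n):
    # Build the sequence bottom-up instead of branching recursively.
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a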
So you can sometimes see recursion as syntactic sugar; depending on what you want, you have to decide whether or not to use it. In low-level programming languages recursion is rarely used, and on a microprocessor it can be a big mistake, but in other cases a recursive solution makes your code easier to understand.
Hope this helps, but to go deeper you will need to read some books.
This recursive factorial calculator runs fine all the way up to an input of 994, at which point I receive this error: "RecursionError: maximum recursion depth exceeded in comparison". Can someone please explain what is meant by this? How can there be a maximum number of recursions? Thanks in advance.
def factorial(x):
    if x == 0:
        return 1
    else:
        return x * factorial(x - 1)

while True:
    u_input = input("")
    print(factorial(int(u_input)))
def calc_factorial(num):
    # Iterative factorial: multiply num, num - 1, ..., 1.
    fact_total = 1
    while num > 0:
        fact_total *= num
        num -= 1
    return fact_total
EDIT:
I understand that recursion is re-using a function from within that function, like a loop, but I do not understand what recursion depth is and would like that explained. I could not tell from the answers to the other question. Apologies for the confusion.
Recursive calls are just like any other function calls, and function calls use memory to keep track of the state inside each function. You will notice that you get a very long traceback showing all the nested function calls that are on the stack. Since memory is finite, recursion depth (the number of nested function calls) is inherently limited even without Python's enforced limit.
The error means what it says: Python limits how deeply recursive calls can be nested. The default is 1000, a depth chosen because reaching it most likely means you have infinite recursion somewhere. Since no computer can keep track of an infinite number of recursive calls (and such a program would never finish anyway), halting with this error message is seen as preferable to recursing as deeply as the machine can handle, which would eventually end in a stack overflow.
You can change this limit if you wish with sys.setrecursionlimit, but the best way to avoid this issue is to change your program to work iteratively instead of recursively. Fortunately, this is easy for a factorial calculation:
def factorial(x):
    result = 1
    for num in range(1, x + 1):
        result *= num
    return result
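If you do want to keep the recursive version, the limit mentioned above can be inspected and raised with the sys module (a sketch; raising the limit only postpones the problem, and very large values risk a real stack overflow):

import sys

print(sys.getrecursionlimit())   # typically 1000 by default
sys.setrecursionlimit(5000)      # allows deeper recursion, e.g. for an input of 994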
There is a built-in function in the math library. It uses an optimized algorithm to get the value of the factorial quickly, whereas a recursive algorithm for the factorial runs into the recursion limit. So if we use the built-in library function, we can escape that problem.
import math

math.factorial(5)  # Answer: 120
I am trying to code the following formula using recursion.
I was thinking of doing it in different ways, but since the expression is recursive, in my opinion recursion is the way to go.
I know how to apply recursion to simple problems, but in this particular case my understanding seems to be wrong. I tried to code it in Python, but the code failed with the message
RuntimeError: maximum recursion depth exceeded
Therefore, I would like to ask what is the best way to code this expression and whether recursion is possible at all.
The python code I tried is:
def coeff(l, m, m0, m1):
    if l == 0 and m == 0:
        return 1.
    elif l == 1 and m == 0:
        return -(m1 + m0) / (m1 - m0)
    elif l == 1 and m == 1:
        return 1. / (m1 - m0)
    elif m < 0 or m > l:
        return 0.
    else:
        return -((l + 1.) * (m1 - m0)) / ((2. * l + 1.) * (m1 + m0)) * (
            (2. * l + 1.) * (m + 1.) / ((l + 1.) * (2. * m + 3.) * (m1 - m0)) * coeff(l, m + 1, m0, m1)
            + (2. * l + 1.) * m / ((l + 1.) * (2. * m - 1.) * (m1 - m0)) * coeff(l, m - 1, m0, m1)
            - l / (l + 1.) * coeff(l - 1, m, m0, m1)
        )
where x = m1 - m0 and y = m1 + m0. In my code I tried to express the a(l, m) coefficient as a function of the others and base the recursion on that.
A naive recursive implementation here obviously recalculates the same things over and over, so it probably pays to store previously calculated values. This can be done either by explicitly filling out a table, or implicitly by memoization (I therefore don't really agree with the comments talking about "recursion vs. dynamic programming").
E.g., using this decorator,
class memoize(dict):
    def __init__(self, func):
        self.func = func

    def __call__(self, *args):
        return self[args]

    def __missing__(self, key):
        result = self[key] = self.func(*key)
        return result
You could write it as
@memoize
def calc_a(l, m, x, y):
    if l == 0 and m == 0:
        return 1
    # Rest of formula goes here.
Note that the same link contains a version that caches between invocations.
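Alternatively (a sketch, not part of the answer above), on Python 3.2+ the standard library's functools.lru_cache gives the same kind of caching without writing the decorator yourself:

from functools import lru_cache

@lru_cache(maxsize=None)   # unbounded cache, like the dict-based memoize above
def calc_a(l, m, x, y):
    if l == 0 and m == 0:
        return 1
    # Rest of formula goes here.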
There are a number of tradeoffs between (memoized) recursion and explicit table building:
Recursion is typically more limited in the number of invocations (which might or might not be an issue in your case - you seem to have an infinite recursion problem in your original code).
Memoized recursion is (arguably) simpler to implement than explicit table building with a loop.