Multi-recursive functions - Python

I’d like to be pointed toward a reference that could better explain recursion when a function employs multiple recursive calls. I think I get how Python handles memory when a function employs a single instance of recursion. I can use print statements to track where the data is at any given point while the function processes the data. I can then walk each of those steps back to see how the resultant return value was achieved.
Once multiple instances of recursion are firing off during a single function call I am no longer sure how the data is actually being processed. The previously illuminating method of well-placed print statements reveals a process that looks quantum, or at least more like voodoo.
To illustrate my quandary here are two basic examples: the Fibonacci and Hanoi towers problems.
def getFib(n):
    if n == 1 or n == 2:
        return 1
    return getFib(n - 1) + getFib(n - 2)
The Fibonacci example features two inline calls. Is getFib(n-1) resolved all the way through the stack first, then getFib(n-2) resolved similarly, each of the resultants being put into new stacks, and those stacks added together line by line, with those sums being totaled for the result?
def hanoi(n, s, t, b):
    assert n > 0
    if n == 1:
        print('move', s, 'to', t)
    else:
        hanoi(n - 1, s, b, t)
        hanoi(1, s, t, b)
        hanoi(n - 1, b, t, s)
Hanoi presents a different problem, in that the function calls are in successive lines. When the function gets to the first call, does it resolve it to n=1, then move to the second call which is already n=1, then to the third until n=1?
Again, just looking for reference material that can help me get smart on what’s going on under the hood here. I’m sure it’s likely a bit much to explain in this setting.

http://www.pythontutor.com/visualize.html
There's even a Hanoi link there so you can follow the flow of code.
This is a link to the Hanoi code they show on their site, but it may have to be adapted to visualize your exact code.
http://www.pythontutor.com/visualize.html#code=%23+move+a+stack+of+n+disks+from+stack+a+to+stack+b,%0A%23+using+tmp+as+a+temporary+stack%0Adef+TowerOfHanoi(n,+a,+b,+tmp)%3A%0A++++if+n+%3D%3D+1%3A%0A++++++++b.append(a.pop())%0A++++else%3A%0A++++++++TowerOfHanoi(n-1,+a,+tmp,+b)%0A++++++++b.append(a.pop())%0A++++++++TowerOfHanoi(n-1,+tmp,+b,+a)%0A++++++++%0Astack1+%3D+%5B4,3,2,1%5D%0Astack2+%3D+%5B%5D%0Astack3+%3D+%5B%5D%0A++++++%0A%23+transfer+stack1+to+stack3+using+Tower+of+Hanoi+rules%0ATowerOfHanoi(len(stack1),+stack1,+stack3,+stack2)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=2&curInstr=0
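If you'd rather stay in the terminal, the same flow can be made visible with a depth-indented trace. This is a sketch: the depth parameter is added purely for illustration and is not part of the original function.
def getFib(n, depth=0):
    indent = '  ' * depth
    print(f"{indent}getFib({n}) called")
    if n == 1 or n == 2:
        print(f"{indent}getFib({n}) = 1")
        return 1
    result = getFib(n - 1, depth + 1) + getFib(n - 2, depth + 1)
    print(f"{indent}getFib({n}) = {result}")
    return result

getFib(4)
The trace shows that getFib(n-1) is resolved all the way down to its base cases before getFib(n-2) even starts: Python fully evaluates the left operand of + first, and there is only ever one call stack, with frames pushed and popped in depth-first order. The same applies to the three successive calls in hanoi: each one runs to completion, pushing and popping its own subtree of calls, before the next line executes.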

Related

What does it mean when a return line in Python includes the name of the function itself? [duplicate]

The code I already have is for a bot that receives a mathematical expression and calculates it. Right now it handles multiply, divide, subtract and add. The problem, though, is that I want to add support for parentheses, and parentheses inside parentheses. For that to happen, I need to run the code I wrote for parenthesis-free expressions on the expression inside the parentheses first. I was going to check for "(" and append the expression inside it to a list until it reaches a ")", unless it reaches another "(" first, in which case I would create a list inside a list. I would subtract, multiply and divide, and then just add together the numbers that are left.
So is it possible to call a definition/function from within itself?
Yes, this is a fundamental programming technique called recursion, and it is often used in exactly the kind of parsing scenarios you describe.
Just be sure you have a base case, so that the recursion ends when you reach the bottom layer and you don't end up calling yourself infinitely.
(Also note the easter egg when you Google recursion: "Did you mean recursion?")
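To make that concrete for your calculator bot, here is a minimal sketch of the recursive pattern, assuming a helper evaluate_flat that stands in for the parenthesis-free evaluator you already wrote (eval is used here only to keep the sketch short):
import re

def evaluate_flat(expr):
    # Stand-in for your existing add/subtract/multiply/divide code.
    return str(eval(expr))

def evaluate(expr):
    # Base case: no parentheses left, so evaluate directly.
    if '(' not in expr:
        return evaluate_flat(expr)
    # Recursive case: find an innermost (...) group, replace it with
    # its value, and evaluate the simplified expression.
    inner = re.search(r'\(([^()]*)\)', expr)
    simplified = expr[:inner.start()] + evaluate_flat(inner.group(1)) + expr[inner.end():]
    return evaluate(simplified)

print(evaluate('2*(3+(4-1))'))  # 12
The base case is the expression with no parentheses; every recursive call strips one pair of parentheses, so the recursion is guaranteed to end.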
Yes, as @Daniel Roseman said, this is a fundamental programming technique called recursion.
Recursion should be used instead of iteration when you want to produce a cleaner solution compared to an iterative version. However, recursion is generally more expensive than iteration because it requires winding, or pushing new stack frames onto the call stack each time a recursive function is invoked -- these operations take up time and stack space, which can lead to an error called stack overflow if the stack frames consume all of the memory allocated for the call stack.
Here is an example of it in Python:
def recur_factorial(n):
    """Function to return the factorial of a number using recursion"""
    if n == 1:
        return n
    else:
        return n * recur_factorial(n - 1)
For more detail, visit the github gist that was used for this answer
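You can see the stack-space cost for yourself. A quick sketch (the exact limit is CPython-specific and adjustable with sys.setrecursionlimit):
import sys

print(sys.getrecursionlimit())  # typically 1000 in CPython

try:
    recur_factorial(50000)      # needs far more frames than the limit allows
except RecursionError as e:
    print(e)                    # maximum recursion depth exceeded

# The iterative equivalent runs in a single stack frame regardless of n:
def iter_factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result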
Yes, it's possible; in Python this is called recursion.
The best description I've seen: "A physical world example would be to place two parallel mirrors facing each other. Any object in between them would be reflected recursively."

Does Python recursion create a new frame?

Why does Python create a new frame for each recursive function call in object-oriented programming?
I have tried to search for answers on the internet but could not find any specific reason or justification for it.
In some cases, Python could absolutely get away with reusing a stack frame for recursive function calls:
def factorial(n, a=1):
    if n == 0:
        return a
    else:
        return factorial(n - 1, n * a)
But often every call needs its own stack frame, since there's some state that's unique to each iteration. Let's say that instead of returning the values immediately, we wanted to print them out:
def factorial2(n, depth=0):
    if n == 0:
        value = 1
    else:
        value = n * factorial2(n - 1, depth + 1)
    print(f"Depth: {depth}, Value: {value}")
    return value
If we call factorial2(3), then by the time we're at the deepest function call, there are four different depth and value variables in different stack frames. Python needs to use these values later, so it can't throw away the stack frames in the meantime.
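Running factorial2(3) makes this visible: the deepest frame prints first, and each outer frame computes its value only as the stack unwinds.
Depth: 3, Value: 1
Depth: 2, Value: 1
Depth: 1, Value: 2
Depth: 0, Value: 6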
Languages like Scheme still create new stack frames for recursive functions in the general case, but they can avoid it in the special case of tail-call recursion. In the first factorial, the recursion is the very last thing that happens before the function returns, so a language like Scheme would know it could re-use the stack frame.
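CPython won't do that for you, but you can apply the transformation by hand. This sketch is the loop a tail-call-optimizing language would effectively turn the first factorial into:
def factorial_loop(n, a=1):
    # Each "recursive call" becomes a rebinding of the parameters,
    # reusing the same frame instead of pushing a new one.
    while n != 0:
        n, a = n - 1, n * a
    return a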
Python could implement this optimization, but Guido van Rossum has opposed it, arguing that it would make debugging harder and encourage non-Pythonic code. You can read these blog articles for his full thought process:
http://neopythonic.blogspot.com/2009/04/tail-recursion-elimination.html
http://neopythonic.blogspot.com/2009/04/final-words-on-tail-calls.html

Quicksort algorithm as described on Wikipedia, in Python

On Wikipedia, there is pseudo-code for the quicksort algorithm. I tried to implement it in Python, but it does not sort anything.
def partition(A, lo, hi):
    pivot = A[hi]
    i = lo
    # Swap
    for j in range(lo, len(A) - 1):
        if (A[j] <= pivot):
            val = A[i]
            A[i] = A[j]
            A[j] = val
            i = i + 1
    val = A[i]
    A[i] = A[hi]
    A[hi] = val
    return (A, i)

def quicksort(A, lo, hi):
    if (lo < hi):
        [A, p] = partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)

A = [5, 3, 2, 6, 8, 9, 1]
A = quicksort(A, 0, len(A) - 1)
print(A)
ORIGINAL: It does not throw an error, so I do not know where I made a mistake.
UPDATE: It now goes into infinite recursion.
It doesn't throw an error or print anything because there is no main program to run anything. You indented what should be the main program, so that is now part of the quicksort function.
Also, this code does throw an error, because you left the "enter code here" placeholder text in what you posted. I'll clean up the code and edit your posting.
I corrected several code errors:
Removed "enter code here" text, which caused an obvious compilation error.
Corrected indentation, so that the last three lines are now your main program.
Corrected the main-program call: quicksort takes the bounds (subscripts) of the array, but you were passing in the array elements themselves.
That fixes your given problem. You now have infinite recursion due to not handling the return values properly. Also, your main program destroys the sorted array, since quicksort doesn't return anything. The final print statement will give None as its result.
You haven't quite implemented the given algorithm. The most important problem is the for loop's upper limit.
Python loops do not include the end value. Your given loop will run j through the values lo through len(A)-2, so you'll never treat the last value of the list.
The upper limit given in Wikipedia is hi, not the list end.
Fix those, and you'll be close to a solution.
Also, I strongly recommend that you stick in a couple of tracing print statements, so you can follow how the program works. For instance, as the first statement of each function, print the function name and the input parameters.
Why do you return A from the function? The Wikipedia algorithm doesn't do that, and it's not necessary: you altered the list in place.
Since you already know about multiple assignment, note that there's an easier way to swap two values:
a, b = b, a
Does this get you moving through the current problems, enough to get you back on the learning track you want?
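For reference, here is one corrected version that applies all of the fixes above (a sketch, not the only possible fix): the loop runs up to hi, partition returns just the pivot index, and the main program doesn't reassign A, since the list is sorted in place.
def partition(A, lo, hi):
    pivot = A[hi]
    i = lo
    for j in range(lo, hi):          # up to, but not including, hi
        if A[j] <= pivot:
            A[i], A[j] = A[j], A[i]  # tuple swap
            i = i + 1
    A[i], A[hi] = A[hi], A[i]        # move the pivot into place
    return i                         # just the pivot index

def quicksort(A, lo, hi):
    if lo < hi:
        p = partition(A, lo, hi)
        quicksort(A, lo, p - 1)
        quicksort(A, p + 1, hi)

A = [5, 3, 2, 6, 8, 9, 1]
quicksort(A, 0, len(A) - 1)  # sorts in place; nothing to reassign
print(A)                     # [1, 2, 3, 5, 6, 8, 9]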

How do Python Recursive Generators work?

In a Python tutorial, I've learned that
Like functions, generators can be recursively programmed. The following
example is a generator to create all the permutations of a given list of items.
def permutations(items):
    n = len(items)
    if n == 0:
        yield []
    else:
        for i in range(len(items)):
            for cc in permutations(items[:i] + items[i+1:]):
                yield [items[i]] + cc

for p in permutations(['r', 'e', 'd']):
    print(''.join(p))
for p in permutations(list("game")):
    print(''.join(p) + ", ", end="")
I cannot figure out how it generates the results. The recursive things and 'yield' really confused me. Could someone explain the whole process clearly?
There are two parts to this: recursion and generators. Here's the non-generator version that just uses recursion:
def permutations2(items):
    n = len(items)
    if n == 0:
        return [[]]
    else:
        l = []
        for i in range(len(items)):
            for cc in permutations2(items[:i] + items[i+1:]):
                l.append([items[i]] + cc)
        return l
l.append([items[i]] + cc) roughly translates to: the permutations of these items include an entry where items[i] is the first item, followed by a permutation of the rest of the items.
The generator version yields one permutation at a time instead of returning the entire list of permutations.
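You can watch that difference with next(). A small sketch using the generator above:
gen = permutations(['r', 'e', 'd'])
print(next(gen))  # ['r', 'e', 'd'] -- computed on demand
print(next(gen))  # ['r', 'd', 'e'] -- resumes where it paused
# The remaining four permutations have not been computed yet;
# permutations2 would have built all six up front.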
When you call a function that returns, it disappears after having produced its result.
When you ask a generator for its next element, it produces it (yields it), and pauses -- yields (the control back) to you. When asked again for the next element, it will resume its operations, and run normally until hitting a yield statement. Then it will again produce a value and pause.
Thus calling a generator with some argument causes creation of actual memory entity, an object, capable of running, remembering its state and arguments, and producing values when asked.
Different calls to the same generator produce different actual objects in memory. The definition is a recipe for the creation of that object. After the recipe is defined, when it is called it can call any other recipe it needs -- or the same one -- to create new memory objects it needs, to produce the values for it.
This is a general answer, not Python-specific.
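In Python terms, that looks like the following (countdown is just an illustrative toy generator):
def countdown(n):
    while n > 0:
        yield n
        n -= 1

a = countdown(3)
b = countdown(3)
print(next(a))  # 3
print(next(a))  # 2 -- 'a' resumed from its own paused state
print(next(b))  # 3 -- 'b' is a separate object, still at its start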
Thanks for the answers. They really helped me clear things up, and now I want to share some useful resources about recursion and generators that I found on the internet, which are also very beginner-friendly.
To understand generators in Python, the link below is very readable and easy to understand:
What does the "yield" keyword do in Python?
To understand recursion: https://www.youtube.com/watch?v=MyzFdthuUcA. This YouTube video gives a "patented" 4-step method for writing any recursive method/function. It is very clear and practicable. The channel also has several videos showing how recursion works and how to trace it.
I hope it can help someone like me.

Very simple Python functions spend a long time in the function itself and not in subfunctions

I have spent many hours trying to figure out what is going on here.
The function grad_logp in the code below is called many times in my program. Using cProfile, and RunSnakeRun to visualize the results, reveals that grad_logp spends about .00004s 'locally' on every call, not in any functions it calls, and the function n spends about .00006s locally on every call. Together these two times make up about 30% of the program time that I care about. It doesn't seem to be function-call overhead, since other Python functions spend far less time 'locally' and merging grad_logp and n does not make my program faster, yet the operations these two functions perform seem rather trivial. Does anyone have any suggestions on what might be happening?
Have I done something obviously inefficient? Am I misunderstanding how cProfile works?
def grad_logp(self, variable, calculation_set):
    p = params(self.p, self.parents)
    return self.n(variable, self.p)

def n(self, variable, p):
    gradient = self.gg(variable, p)
    return np.reshape(gradient, np.shape(variable.value))

def gg(self, variable, p):
    if variable is self:
        gradient = self._grad_logps['x'](x=self.value, **p)
    else:
        gradient = __builtin__.sum([self._pgradient(variable, parameter, value, p) for parameter, value in self.parents.iteritems()])
    return gradient
Functions coded in C are not instrumented by profiling; so, for example, any time spent in sum (which you're spelling __builtin__.sum) will be charged to its caller. Not sure what np.reshape is, but if it's numpy.reshape, the same applies there.
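A way to sanity-check what the profiler attributes where is to run a toy version (a sketch with stand-in functions, not your model code) and compare tottime, the time in a function's own body, against cumtime, which includes its callees:
import cProfile
import pstats

def inner(values):
    return sum(v * v for v in values)

def outer(values):
    return inner(values)

cProfile.run('outer(list(range(200000)))', 'out.prof')
pstats.Stats('out.prof').sort_stats('tottime').print_stats(5)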
Your "many hours" might be better spent making your code less like a maze of twisty little passages and also documenting it.
The first method's arg calculation_set is NOT USED.
Then it does p = params(self.p,self.parents) but that p is NOT USED.
variable is self???
__builtin__.sum???
First get it understandable, then get it correct. Then, and only then, worry about the speed.
