Fibonacci Computing Time Difference

I wrote the following two codes for computing an element of the Fibonacci Sequence.
def fib(n):
    zero, one = 0, 1
    k = 1
    while k < n:
        zero, one = one, zero + one
        k = k + 1
    return one
def fib2(n, memo=None):
    if memo is None:
        memo = {}
    if n == 1 or n == 2:
        return 1
    if n in memo:
        return memo[n]
    else:
        memo[n-1] = fib2(n-1, memo)
        memo[n-2] = fib2(n-2, memo)
        return memo[n-1] + memo[n-2]
##import timeit
##
##print('Fibonacci 1:', timeit.timeit('fib(10000)', '''def fib(n):
##    zero, one = 0, 1
##    k = 1
##    while k < n:
##        zero, one = one, zero + one
##        k = k + 1
##    return one''', number=100))
##
##print('Fibonacci 2:', timeit.timeit('fib2(10000)', '''import sys; sys.setrecursionlimit(10001);
##def fib2(n, memo=None):
##    if memo is None:
##        memo = {}
##    if n == 0 or n == 1:
##        return 1
##    if n in memo:
##        return memo[n]
##    else:
##        memo[n-1] = fib2(n-1, memo)
##        memo[n-2] = fib2(n-2, memo)
##        return memo[n-1] + memo[n-2]''', number=100))
I am using a simple while loop in fib, and fib2 is a recursive implementation of the same thing. But it turns out that fib2 is much slower. I want to know why. Is it because fib2 creates a whole lot of stack frames? Have I implemented fib2 correctly?
Thanks.

Time this streamlined recursive version against your original iterative solution -- first raise the recursion limit to roughly 1% to 10% above n:
def fib2(n, memo={0: None, 1: 1, 2: 1}):
    if n in memo:
        return memo[n]

    previous = fib2(n - 1)  # implicitly computes fib2(n - 2)
    result = memo[n] = previous + memo[n - 2]
    return result
I'm not passing memo as an argument on the recursion; instead I'm taking advantage of the well-known "problem" that a mutable default argument is created once and then shared by every call.
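To see that behaviour in isolation, here is a tiny sketch (not from the answer itself): the default dict is built once, when the function is defined, and the same object is reused on every call.
def remember(x, seen={}):
    # 'seen' is created once at definition time and shared across calls
    seen[x] = True
    return seen

print(remember(1))  # {1: True}
print(remember(2))  # {1: True, 2: True} -- the earlier entry is still there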
The above solution is ~ 4.5x slower than the original iterative one on my machine on the first invocation -- after that, memoization takes over. We can improve on this a little bit, in both space & time, by changing our "memory" from a dictionary to a list since all the keys are sequential integers:
def fib3(n, memo=[None, 1, 1]):
    if n < len(memo):
        return memo[n]

    previous = fib3(n - 1)  # implicitly computes fib3(n - 2)
    result = previous + memo[-2]
    memo.append(result)
    return result
This one clocks in at ~ 3x slower than the iterative solution, on my machine, for the first invocation. However, we can do better speed-wise while staying recursive:
def fib4(n, res=0, nxt=1):
    if n == 0:
        return res

    return fib4(n - 1, nxt, res + nxt)
This is only ~ 2x slower than the iterative solution, and it uses no memoization. In a language with tail call optimization (which Python is not), this would likely match the iterative version.
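If you want to reproduce the rough ratios above, a minimal timing sketch along these lines should do (assuming the iterative fib from the question and fib2/fib3/fib4 above are all defined; exact numbers will vary by machine):
import sys
import timeit

sys.setrecursionlimit(11000)  # ~10% above n=10000 so the recursive versions can finish

n = 10000
for func in (fib, fib2, fib3, fib4):
    # number=1: we only care about the first invocation, before memoization kicks in
    elapsed = timeit.timeit(lambda: func(n), number=1)
    print(f"{func.__name__}: {elapsed:.6f} s")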

Related

Jump Game II Leetcode, why is my memoization failing?

Here is the problem:
Jump Game II
Given an array of non-negative integers nums, you are initially positioned at the first index of the array.
Each element in the array represents your maximum jump length at that position.
Your goal is to reach the last index in the minimum number of jumps.
You can assume that you can always reach the last index.
class Solution:
    def jump(self, nums: List[int]) -> int:
        memo = {}
        result = self.helper(0, nums, 0, memo)
        return result

    def helper(self, numJump, nums, currentInd, memo):
        if currentInd in memo:
            return memo[currentInd]
        if currentInd == len(nums) - 1:
            return numJump
        val = nums[currentInd]
        totalMin = float("inf")
        for ind in range(1, val + 1):
            newInd = currentInd + ind
            if newInd >= len(nums):
                continue
            ret = self.helper(numJump + 1, nums, newInd, memo)
            if ret < totalMin:
                totalMin = ret
        if currentInd not in memo:
            memo[currentInd] = totalMin
        return totalMin
My solution works without my cache, but as soon as I add it, I get incorrect output.
Here is an example:
input = [1,2,1,1,1]
expected output = 3
actual output = 4
The problem is that you memoize the total number of steps from the beginning to the current index before all alternatives have been considered. After finding a first path to the end, those distances might not be optimal yet, but you store them in memo anyway. When you later reach the same index via an alternative route, you trust the memo to give you the optimal distance -- which is a wrong assumption.
The right way to memoize is to store the number of steps ahead, as if the current index were the starting point. Since that value is only stored once all alternatives starting at that index have been considered, it is an optimal value.
As you backtrack, add 1 step to what you get from the recursive call.
Here is your code adapted with that approach:
def helper(self, nums, currentInd, memo):
    if currentInd in memo:
        return memo[currentInd]
    if currentInd == len(nums) - 1:
        return 0  # No more steps needed
    val = nums[currentInd]
    totalMin = float("inf")
    for ind in range(1, val + 1):
        newInd = currentInd + ind
        if newInd >= len(nums):
            break  # No need to continue...
        ret = self.helper(nums, newInd, memo)
        if ret < totalMin:
            totalMin = ret
    memo[currentInd] = 1 + totalMin  # Add step taken
    return memo[currentInd]
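Putting it together, here is a runnable sketch with a matching wrapper; the typing import and the adjusted jump call are additions of mine, since the answer above only shows the helper:
from typing import List

class Solution:
    def jump(self, nums: List[int]) -> int:
        memo = {}
        return self.helper(nums, 0, memo)  # numJump is no longer threaded through

    def helper(self, nums, currentInd, memo):
        if currentInd in memo:
            return memo[currentInd]
        if currentInd == len(nums) - 1:
            return 0  # No more steps needed
        val = nums[currentInd]
        totalMin = float("inf")
        for ind in range(1, val + 1):
            newInd = currentInd + ind
            if newInd >= len(nums):
                break
            ret = self.helper(nums, newInd, memo)
            if ret < totalMin:
                totalMin = ret
        memo[currentInd] = 1 + totalMin  # the one step taken from currentInd
        return memo[currentInd]

print(Solution().jump([1, 2, 1, 1, 1]))  # 3, as expected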

Subset sum (dynamic programming) in Python - complexity problem

I have a problem with an implementation of a function that solves the subset sum problem in Python.
We have dynamic programming here, so the complexity should be polynomial.
The problem is that if the size of the set grows linearly, and the size of the numbers also increases linearly (in value, not in the number of digits), then the execution time can grow exponentially.
My guess is that this may be due to the particular implementation.
Is it possible to improve it somehow?
Code in Python:
def subsetsum(array, num):
    if num == 0 or num < 1:
        return None
    elif len(array) == 0:
        return None
    else:
        if array[0] == num:
            return [array[0]]
        else:
            with_v = subsetsum(array[1:], (num - array[0]))
            if with_v:
                return [array[0]] + with_v
            else:
                return subsetsum(array[1:], num)
You're using slices to pass suffixes of array; each slice makes a copy, which takes linear time. To avoid that, you can pass indices instead.
Another advantage is that indices are hashable, so you can cache (memoize) and avoid recomputing answers:
from functools import lru_cache

def ssum(array, N):
    @lru_cache(maxsize=None)
    def subsetsum(idx, num):
        if num < 1 or idx >= len(array):
            return frozenset()
        if array[idx] == num:
            return frozenset([idx])
        with_v = subsetsum(idx + 1, num - array[idx])
        if with_v:
            return with_v | frozenset([idx])
        else:
            return subsetsum(idx + 1, num)

    return list(array[i] for i in subsetsum(0, N))
>>> ssum([1,1,2], 4)
[1, 1, 2]
Unfortunately, there is still the cost of copying the answer obtained from the suffix: the frozenset union builds a new set on the way back up.
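One way around that copying cost -- a sketch of mine, not part of the answer above -- is to memoize only a boolean "is this (index, remainder) pair reachable" and reconstruct the chosen elements in a second pass, so each cache entry stores a constant-size value:
from functools import lru_cache

def ssum_bool(array, N):
    @lru_cache(maxsize=None)
    def possible(idx, num):
        # True iff some subset of array[idx:] sums exactly to num
        if num == 0:
            return True
        if num < 0 or idx >= len(array):
            return False
        return possible(idx + 1, num - array[idx]) or possible(idx + 1, num)

    if not possible(0, N):
        return None

    result, num = [], N
    for idx in range(len(array)):
        if num == 0:
            break
        # take array[idx] whenever the remainder stays reachable from the rest
        if possible(idx + 1, num - array[idx]):
            result.append(array[idx])
            num -= array[idx]
    return result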

Fibonacci sequence calculator python

Hi, I'm fairly new to Python and trying to create a Fibonacci calculator function that prints all values up to a given number; if the number entered is not in the sequence, then it adds the next Fibonacci number to the list. For example, if 10 is entered it should return [0, 1, 1, 2, 3, 5, 8, 13]. The function has to be recursive. Here is my current code:
def fibonacci(n):
    n = int(n)

    # The nested sub_fib function computes the Fibonacci sequence
    def sub_fib(n):
        if n < 2:
            return n
        else:
            return (sub_fib(n-1) + sub_fib(n-2))

    # This aspect of the main fib function applies the condition: if the number
    # input is not in the sequence then it returns the next value up
    fib_seq = [sub_fib(i) for i in range(0, n) if sub_fib(i) <= n]
    if fib_seq[-1] < n:
        fib_seq.append(fib_seq[-1] + fib_seq[-2])
        return fib_seq
    else:
        return fib_seq
print(fibonacci(input("Input a number to print sequence up to: ")))
I've managed to get it to work, but it is incredibly slow (I assume due to the recursion). Is there any way I can speed it up without massively changing the program?
The two main reasons why your program is slow:
you calculate each Fibonacci number separately and do not reuse the effort you have invested in finding the previous numbers;
you calculate the first n Fibonacci numbers, but from the moment the condition fails, you can stop.
You can change the program to still be recursive, but reuse the work to compute the previous number, and stop from the moment you have constructed the list.
You simply have to use the following function:
def fibon(a, b, n, result):
    c = a + b
    result.append(c)
    if c < n:
        fibon(b, c, n, result)
    return result
and we initialize it with fibon(0, 1, n, [0, 1]), seeding the result with the first two Fibonacci numbers so the output matches the expected [0, 1, 1, 2, 3, 5, 8, 13]. In each call it calculates the next Fibonacci number c = a+b and appends it to result. If that number is still smaller than n (c < n), we need to calculate the next number and thus perform the recursive call.
def fibonacci(n):
    n = int(n)

    def fibon(a, b, n, result):
        c = a + b
        result.append(c)
        if c < n:
            fibon(b, c, n, result)
        return result

    return fibon(0, 1, n, [0, 1])  # seed with 0 and 1 so they appear in the output
print(fibonacci(input("Input a number to print sequence up to: ")))
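With the seeded call above, entering 10 produces the list the question asks for:
Input a number to print sequence up to: 10
[0, 1, 1, 2, 3, 5, 8, 13]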
This uses recursion but is much faster than naive recursive implementations:
def fib(n):
    if n == 1:
        return [1]
    elif n == 2:
        return [1, 1]
    else:
        sub = fib(n - 1)
        return sub + [sub[-1] + sub[-2]]
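For example (note that this version takes a count of terms rather than an upper bound, so it answers a slightly different question than the one asked):
>>> fib(8)
[1, 1, 2, 3, 5, 8, 13, 21]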
Here are some examples of how you can improve the speed:
"""
Performance calculation for recursion, memoization, tabulation and generator
fib took: 27.052446
mem_fib took: 0.000134
tabular_fib took: 0.000175
yield_fib took: 0.000033
"""
from timeit import timeit
LOOKUP_SIZE = 100
number = 30
lookup = [None] * LOOKUP_SIZE
def fib(n):
    return 1 if n <= 2 else fib(n - 1) + fib(n - 2)

def mem_fib(n):
    """Using memoization."""
    if n <= 2:
        return 1
    if lookup[n] is None:
        lookup[n] = mem_fib(n - 1) + mem_fib(n - 2)
    return lookup[n]

def tabular_fib(n):
    """Using Tabulation."""
    results = [1, 1]
    for i in range(2, n):
        results.append(results[i - 1] + results[i - 2])
    return results[-1]

def yield_fib(n):
    """Using generator."""
    a = b = 1
    yield a
    yield b
    while n > 2:
        n -= 1
        a, b = b, a + b
    yield b
for f in [fib, mem_fib, tabular_fib, yield_fib]:
    # note: calling yield_fib here only creates the generator object;
    # its values are never consumed, which is part of why it looks so fast
    t = timeit(stmt=f"f({number})", number=10, globals=globals())
    print(f"{f.__name__} took: {t:.6f}")

Python Pure recursion - Divisor - One input

What is the recursive call (or inductive step) for a function that returns the number of integers from 1 to N which evenly divide N? The idea is to conceive a purely recursive implementation of this function in Python. No 'for' or 'while' loops may be used, nor any modules. The function num_of_divisors(42) returns 8, representing 1, 2, 3, 6, 7, 14, 21, and 42 as divisors of 42.
def num_of_divisors(n):
    # every divisor i <= sqrt(n) pairs with n // i; a perfect-square root counts once
    return sum(1 if i * i == n else 2 for i in range(1, int(n ** 0.5) + 1) if n % i == 0)
Good luck explaining it to your teacher!
If you really can't use for loops (?????????) then this is impossible without simulating one.
def stupid_num_of_divisors_assigned_by_shortsighted_teacher(n, loop_num=1):
    """I had to copy this from Stack Overflow because it's such an
    inane restriction it's actually harmful to learning the language
    """
    if loop_num * loop_num < n:
        if n % loop_num == 0:
            # loop_num and n // loop_num form a pair of distinct divisors
            return 2 + \
                stupid_num_of_divisors_assigned_by_shortsighted_teacher(n, loop_num + 1)
        else:
            return stupid_num_of_divisors_assigned_by_shortsighted_teacher(n, loop_num + 1)
    else:
        if loop_num * loop_num == n:
            return 1
        return 0
Bonus points: explain why you're adding 2 in the first conditional, but only 1 in the second conditional!
Here you go, buddy, your teacher'll be happy.
def _num_of_divisors(n, k):
    if k == 0:
        return 0
    return _num_of_divisors(n, k-1) + (n % k == 0)

def num_of_divisors(n):
    return _num_of_divisors(n, n)
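For example:
>>> num_of_divisors(42)
8
Keep in mind that the recursion depth here is n, so inputs of roughly 1000 or more will hit Python's default recursion limit.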
It's easier than you think to convert such a simple problem from a loop to a recursive function.
Start with a loop implementation:
n = 42
result = []
for i in range(1, n + 1):
    if n % i == 0:
        result.append(i)
then write a function
def num_of_divisors_helper(i, n, result):
    if <condition when a number should be added to result>:
        result.append(n)
    # Termination condition
    if <when should it stop>:
        return
    # Recursion
    num_of_divisors_helper(i+1, n, result)
Then you define a wrapper function num_of_divisors that calls num_of_divisors_helper. You should be able to fill the gaps in the recursive function and write the wrapper function yourself.
It's a simple, inefficient solution, but it matches your terms.
Without using %
def is_divisible(n, i, k):
    if k > n:
        return False
    if n - i*k == 0:
        return True
    else:
        return is_divisible(n, i, k+1)

def num_of_divisors(n, i=1):
    if i > n/2:
        return 1
    if is_divisible(n, i, 1):
        return 1 + num_of_divisors(n, i+1)
    else:
        return num_of_divisors(n, i+1)
num_of_divisors(42) -> 8
def n_divisors(n, t=1):
    return (not n % t) + (n_divisors(n, t+1) if t < n else 0)
good luck on the test later ... better hit those books for real, go to class and take notes...
With just one input, I guess:
t = 0

def n_divisors(n):
    global t
    t += 1
    return (not n % t) + (n_divisors(n) if t < n else 0)
(Note that because t is a global that never gets reset, this version only gives the right answer the first time it is called.)

python recursion combination [duplicate]

How can I write a function that computes:
C(n, k) = 1                          if k = 0
        = 0                          if n < k
        = C(n-1, k-1) + C(n-1, k)    otherwise
So far I have:
def choose(n, k):
    if k == 0:
        return 1
    elif n < k:
        return 0
    else:
Assuming the missing operators in your question are subtractions (thanks lejlot), this should be the answer:
def choose(n, k):
    if k == 0:
        return 1
    elif n < k:
        return 0
    else:
        return choose(n-1, k-1) + choose(n-1, k)
Note that on most Python systems the default recursion limit is only 1000; beyond that depth an exception is raised. You may need to get around that by converting this recursive function into an iterative one instead.
Here's an example iterative function that uses a stack to mimic recursion, while avoiding Python's maximum recursion limit:
def choose_iterative(n, k):
    stack = []
    stack.append((n, k))

    combinations = 0

    while len(stack):
        n, k = stack.pop()
        if k == 0:
            combinations += 1
        elif n < k:
            combinations += 0  # or just replace this line with `pass`
        else:
            stack.append((n-1, k))
            stack.append((n-1, k-1))

    return combinations
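For example (it still does exponential work, as noted just below, but it no longer touches the recursion limit):
>>> choose_iterative(5, 2)
10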
Improving from Exponential to Linear time
All of the answers given so far run in exponential time O(2^n). However, it's possible to make this run in O(k) by changing a single line of code.
Explanation:
The reason for the exponential running time is that each recursion splits the problem into two overlapping subproblems with this line of code:
def choose(n, k):
    ...
    return choose(n-1, k-1) + choose(n-1, k)
To see why this is so bad consider the example of choose(500, 2). The numeric value of 500 choose 2 is 500*499/2; however, using the recursion above it takes 250499 recursive calls to compute that. Obviously this is overkill since only 3 operations are needed.
To improve this to linear time, all you need to do is choose a different recursion which does not split the problem into two subproblems (there are many on Wikipedia).
For example, the following recursion is equivalent, but only uses 3 recursive calls to compute choose(500, 2):
def choose(n, k):
    ...
    return ((n + 1 - k)/k)*choose(n, k-1)
The reason for the improvement is that each recursion has only one subproblem that reduces k by 1 with each call. This means that we are guaranteed that this recursion will only take k + 1 recursions or O(k). That's a vast improvement for changing a single line of code!
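For reference, here is one way the full O(k) version might look with the base cases from the question filled in -- a sketch of mine, using integer arithmetic (multiplying before dividing keeps the result exact even for large inputs, which floating-point division does not):
def choose_linear(n, k):
    if k == 0:
        return 1
    if n < k:
        return 0
    # (n + 1 - k) * C(n, k-1) is always divisible by k, so // is exact
    return (n + 1 - k) * choose_linear(n, k - 1) // k

print(choose_linear(500, 2))  # 124750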
If you want to take this a step further, you could take advantage of the symmetry in "n choose k" to ensure that k <= n/2:
def choose(n, k):
    ...
    k = k if k <= n/2 else n - k  # n choose k == n choose (n - k), so use the smaller k
    return ((n + 1 - k)/k)*choose(n, k-1)
Solution from wikipedia (http://en.wikipedia.org/wiki/Binomial_coefficient)
def choose(n, k):
    if k < 0 or k > n:
        return 0
    if k > n - k:  # take advantage of symmetry
        k = n - k
    if k == 0 or n <= 1:
        return 1
    return choose(n-1, k) + choose(n-1, k-1)
You're trying to calculate the number of options to choose k out of n elements:
def choose(n, k):
    if k == 0:
        return 1  # there's only one option to choose zero items out of n
    elif n < k:
        return 0  # there's no way to choose k of n when k > n
    else:
        # The recursion: you can do either
        # 1. choose the n-th element and then the rest k-1 out of n-1
        # 2. or choose all the k elements out of n-1 (not choose the n-th element)
        return choose(n-1, k-1) + choose(n-1, k)
Just like this:
def choose(n, k):
    if k == 0:
        return 1
    elif n < k:
        return 0
    else:
        return choose(n-1, k-1) + choose(n-1, k)
EDIT
This is the easy way; for an efficient one, take a look at Wikipedia and spencerlyon2's answer.
