I've made two functions for computing the Fibonacci sequence, one using recursion with memoization and one using a loop:
def fib_rec(n, dic={0: 0, 1: 1}):
    if n in dic:
        return dic[n]
    else:
        fib = fib_rec(n - 2, dic) + fib_rec(n - 1, dic)
        dic[n] = fib
        return fib
def fib_loop(n):
    if n == 0 or n == 1:
        return n
    else:
        smaller = 0
        larger = 1
        for i in range(1, n):
            smaller, larger = larger, smaller + larger
        return larger
I've heard that the Fibonacci sequence is often solved using recursion, but I'm wondering why. Both my algorithms have linear time complexity, but the one using a loop doesn't have to carry a dictionary of all past Fibonacci numbers, and it also won't exceed Python's recursion depth.
Is this problem solved using recursion only to teach recursion, or am I missing something?
The usual recursive O(N) Fibonacci implementation is more like this:
def fib(n, a=0, b=1):
    if n == 0: return a
    if n == 1: return b
    return fib(n - 1, b, a + b)
The advantage with this approach (aside from the fact that it uses O(1) memory) is that it is tail-recursive: some compilers and/or runtimes can take advantage of that to secretly convert it to a simple JUMP instruction. This is called tail-call optimization.
Python, sadly, doesn't use this strategy, so it will use extra memory for the call stack, which as you noted quickly runs into Python's recursion depth limit.
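For reference, here is a minimal sketch of that mechanical rewrite in Python (fib_iter is just an illustrative name; the tail call becomes an in-place parameter update inside a loop):
def fib_iter(n, a=0, b=1):
    # The tail call fib(n - 1, b, a + b) becomes an in-place update
    # of the parameters, so no call stack is needed.
    while n > 0:
        n, a, b = n - 1, b, a + b
    return a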
The Fibonacci sequence is mostly a toy problem, used for teaching people how to write algorithms and about big-O notation. It has elegant functional solutions and showcases the strengths of dynamic programming (your dictionary-based solution is basically that), but it's also practically a solved problem.
We can also go a lot faster. The page https://www.nayuki.io/page/fast-fibonacci-algorithms describes how. It includes a fast doubling algorithm written in Python:
#
# Fast doubling Fibonacci algorithm (Python)
# by Project Nayuki, 2015. Public domain.
# https://www.nayuki.io/page/fast-fibonacci-algorithms
#

# (Public) Returns F(n).
def fibonacci(n):
    if n < 0:
        raise ValueError("Negative arguments not implemented")
    return _fib(n)[0]

# (Private) Returns the tuple (F(n), F(n+1)).
def _fib(n):
    if n == 0:
        return (0, 1)
    else:
        a, b = _fib(n // 2)
        c = a * (b * 2 - a)
        d = a * a + b * b
        if n % 2 == 0:
            return (c, d)
        else:
            return (d, c + d)
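For example, under this indexing (F(0) = 0):
print(fibonacci(100))  # 354224848179261915075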
I tried a problem on Project Euler where I needed to find the sum of all the even Fibonacci terms under 4 million. It took me a long time, but then I found out that I can use memoization to do it; however, it still seems to take a long time. After a lot of research, I found out that I can use the built-in lru_cache from functools. My question is: why isn't my memoization as fast as lru_cache?
Here's my code:
from functools import lru_cache

@lru_cache(maxsize=1000000)
def fibonacci_memo(input_value):
    global value
    fibonacci_cache = {}
    if input_value in fibonacci_cache:
        return fibonacci_cache[input_value]
    if input_value == 0:
        value = 1
    elif input_value == 1:
        value = 1
    elif input_value > 1:
        value = fibonacci_memo(input_value - 1) + fibonacci_memo(input_value - 2)
    fibonacci_cache[input_value] = value
    return value

def sumOfFib():
    SUM = 0
    for n in range(500):
        if fibonacci_memo(n) < 4000000:
            if fibonacci_memo(n) % 2 == 0:
                SUM += fibonacci_memo(n)
    return SUM

print(sumOfFib())
The code works by the way. It takes less than a second to run it when I use the lru_cache module.
The other answer shows the correct way to calculate the Fibonacci sequence, but you should also know why your memoization wasn't working. To be specific:
fibonacci_cache = {}
This line being inside the function means you were emptying your cache every time fibonacci_memo was called.
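A minimal sketch of the fix is to move the cache outside the function so it persists across calls (keeping the question's convention that the terms at 0 and 1 are both 1) and drop the global:
fibonacci_cache = {}   # module level: persists across calls

def fibonacci_memo(input_value):
    if input_value in fibonacci_cache:
        return fibonacci_cache[input_value]
    if input_value in (0, 1):
        value = 1
    else:
        value = fibonacci_memo(input_value - 1) + fibonacci_memo(input_value - 2)
    fibonacci_cache[input_value] = value
    return value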
You shouldn't be computing the full Fibonacci sequence, not even by dynamic programming. Since the Fibonacci sequence satisfies a linear recurrence relation with constant coefficients and constant order, so does the sequence of its partial sums.
Definitely don't cache all the values; that consumes memory unnecessarily. When a recurrence has constant order, you only need to remember as many previous terms as the order of the recurrence.
Furthermore, there is a way to turn a recurrence of constant order into a system of recurrences of order one, whose solution is given by a power of a matrix. This gives a faster algorithm for large values of n, although each step is more expensive. So the best method combines the two, choosing the first for small values of n and the latter for large inputs.
O(n) using the recurrence for the sum
Denote by S_n=F_0+F_1+...+F_n the sum of the first Fibonacci numbers F_0,F_1,...,F_n.
Observe that
S_{n+1}-S_n=F_{n+1}
S_{n+2}-S_{n+1}=F_{n+2}
S_{n+3}-S_{n+2}=F_{n+3}
Since F_{n+3}=F_{n+2}+F_{n+1} we get that S_{n+3}-S_{n+2}=S_{n+2}-S_n. So
S_{n+3}=2S_{n+2}-S_n
with the initial conditions S_0=F_0=1, S_1=F_0+F_1=1+1=2, and S_2=S_1+F_2=2+2=4.
One thing that you can do is compute S_n bottom up, remembering the values of only the previous three terms at each step. You don't need to remember all of the values of S_k, from k=0 to k=n. This gives you an O(n) algorithm with O(1) amount of memory.
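A minimal sketch of that bottom-up computation, using this answer's convention F_0 = F_1 = 1 (sum_fib_linear is an illustrative name):
def sum_fib_linear(n):
    # S_n = F_0 + ... + F_n with F_0 = F_1 = 1, via S_{k+3} = 2*S_{k+2} - S_k
    if n == 0: return 1
    if n == 1: return 2
    if n == 2: return 4
    s0, s1, s2 = 1, 2, 4          # S_0, S_1, S_2
    for _ in range(n - 2):
        s0, s1, s2 = s1, s2, 2 * s2 - s0
    return s2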
O(ln(n)) by matrix exponentiation
You can also get an O(ln(n)) algorithm in the following way:
Let X_n be the column vector with components S_{n+2},S_{n+1},S_n.
Then the recurrence above gives
X_{n+1}=AX_n
where A is the matrix
[
[2,0,-1],
[1,0,0],
[0,1,0],
]
Therefore, X_n=A^nX_0. We have X_0. To multiply by A^n we can do exponentiation by squaring.
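A minimal sketch of that idea in plain Python, again with F_0 = F_1 = 1 (mat_mul, mat_pow and sum_fib_log are illustrative names):
def mat_mul(A, B):
    # 3x3 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(A, e):
    # exponentiation by squaring
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

def sum_fib_log(n):
    # X_n = A^n X_0 with X_0 = (S_2, S_1, S_0) = (4, 2, 1); S_n is the last component
    if n <= 2:
        return (1, 2, 4)[n]
    A = [[2, 0, -1], [1, 0, 0], [0, 1, 0]]
    P = mat_pow(A, n)
    x0 = (4, 2, 1)
    return sum(P[2][j] * x0[j] for j in range(3))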
For the sake of completeness, here are implementations of the general ideas described in @NotDijkstra's answer plus my humble optimizations, including the "closed form" solution implemented in integer arithmetic.
We can see that the "smart" methods are not only an order of magnitude faster but also seem to scale better, consistent with the fact (thanks @NotDijkstra) that Python big ints use better-than-naive multiplication.
import numpy as np
import operator as op
from simple_benchmark import BenchmarkBuilder, MultiArgument

B = BenchmarkBuilder()

def pow(b, e, mul=op.mul, unit=1):
    if e == 0:
        return unit
    res = b
    for bit in bin(e)[3:]:
        res = mul(res, res)
        if bit == "1":
            res = mul(res, b)
    return res

def mul_fib(a, b):
    return (a[0]*b[0] + 5*a[1]*b[1]) >> 1, (a[0]*b[1] + a[1]*b[0]) >> 1

def fib_closed(n):
    return pow((1, 1), n + 1, mul_fib)[1]

def fib_mat(n):
    return pow(np.array([[1, 1], [1, 0]], 'O'), n, op.matmul)[0, 0]

def fib_sequential(n):
    t1, t2 = 1, 1
    for i in range(n - 1):
        t1, t2 = t2, t1 + t2
    return t2

def sum_fib_direct(n):
    t1, t2, res = 1, 1, 1
    for i in range(n):
        t1, t2, res = t2, t1 + t2, res + t2
    return res

def sum_fib(n, method="closed"):
    if method == "direct":
        return sum_fib_direct(n)
    return globals()[f"fib_{method}"](n + 2) - 1

methods = "closed mat sequential direct".split()

def f(method):
    def f(n):
        return sum_fib(n, method)
    f.__name__ = method
    return f

for method in methods:
    B.add_function(method)(f(method))

B.add_arguments('N')(lambda: (2*(1 << k,) for k in range(23)))

r = B.run()
r.plot()

import matplotlib.pylab as P
P.savefig("fib.png")
I am not sure how this is taking you anything near a second. Here is the memoized version without fanciness:
class fibs(object):
    def __init__(self):
        self.thefibs = {0: 0, 1: 1}

    def __call__(self, n):
        if n not in self.thefibs:
            self.thefibs[n] = self(n - 1) + self(n - 2)
        return self.thefibs[n]

dog = fibs()
sum([dog(i) for i in range(40) if dog(i) < 4000000])
I wrote this Python code to compute the harmonic series for a given value n both recursively and iteratively. Here are the functions:
def H(n):
    if (n <= 0): raise ValueError("n must be bigger than 0.")
    if (n == 1): return 1
    else: return sum([1/x for x in range(1, n+1)])

def H_rec(n):
    if (n <= 0): raise ValueError("n must be bigger than 0.")
    if (n == 1): return 1
    else: return 1/n + H(n-1)
Then, when I run the code 10 times for each, I get the following times:
RECURSIVE TIMES [22.755768060684204, 17.231882095336914, 14.965636014938354, 14.277509927749634, 14.0553719997406, 13.788002014160156, 13.338942766189575, 13.72638201713562, 14.690818071365356, 14.236813068389893]
RECURSIVE MEAN: 15.30671260356903
ITERATIVE TIMES [15.093524932861328, 12.801156759262085, 13.350629091262817, 13.806081056594849, 13.29387378692627, 13.876484870910645, 12.934008121490479, 13.859009981155396, 13.350301027297974, 13.590226888656616]
ITERATIVE MEAN: 13.595529651641845
The code is supposed to find H_rec(100000000) and H(100000000), i.e. for a fairly large n.
I don't understand exactly why the recursive definition takes longer when the way it's defined seems to require less computation. The iterative one requires forming a list and finding its sum, while the recursive one just sums 1/n + H(n-1).
What is so misleading about this recursion? Why is it slow?
Your recursive function is calling the iterative one in else: return 1/n + H(n-1); you need to modify it as follows:
def H_rec(n):
    if (n <= 0): raise ValueError("n must be bigger than 0.")
    if (n == 1): return 1
    else: return 1/n + H_rec(n-1)  # Fix this line
Code executed inside the interpreter itself (for example, built-in functions implemented in C) is fastest. Python code (which is compiled to Python bytecode that is interpreted by a virtual machine) is slower. User-defined function calls are slowest of all, because the virtual machine has to manage its own call stack to track the execution environments.
Consider the following code:
def S1(n):
    return sum(range(1, n))

def S2(n):
    rv = 0
    for i in range(1, n):
        rv += i
    return rv

def S3(n):
    if n == 0:
        return 0
    else:
        return n + S3(n - 1)
S1 is the fastest; as much work as possible is pushed into the interpreter. S2 is slower because now each addition is a separate Python instruction to be interpreted. S3 is slowest because now each addition involves another function call to get its second operand; as noted before, function calls are slow in Python.
>>> print(timeit.timeit('S1(50)', globals=globals()))
1.2118524569996225
>>> print(timeit.timeit('S2(50)', globals=globals()))
3.262354401002085
>>> print(timeit.timeit('S3(50)', globals=globals()))
10.102635376999388
I am a bit of a novice when it comes to Python (I just started learning it two weeks ago) but am having a lot of fun completing challenges. One challenge that is giving me trouble is a variation on the Fibonacci sequence with mortal rabbits (i.e. the Padovan sequence, but with variable lifespan).
After much trial and error, I have written code that returns outputs matching the tables I've made for different lifespan settings. However, the algorithm gets really slow at around 40 turns and for lifespans over 15 months, and the challenges are timed. The code I made is:
def fib(n):
    if n == 1:
        return 1
    elif n == 2:
        return 1
    else:
        return fib(n-1) + fib(n-2)

def fibd(n, m):
    if n < (m+2) and m > 3:
        return fib(n)
    elif n < (m+1) and m == 3:
        return fib(n)
    elif n >= (m+1) and m == 3:
        return fibd(n-2, m) + fibd(n-3, m)
    elif n >= (m+2) and m > 3:
        return sum(map(fibd, [n-(m+x) for x in (range(-(m-2), 2))], [m]*m))
That's as simple as I could make it with my modest skillset, but I'm at a loss. n represents the number of turns where we stop counting, and m represents the number of turns each rabbit lives before dying. Basically, the first function gives the Fibonacci sequence. This is used because until the rabbits actually die, the Padovan generalization is identical to a Fibonacci sequence, so I need to call back to it as part of the recursion. Then, there is a separate chunk of code for a lifespan of 3, because while for lifespans of 4 or more the formula for obtaining the number of rabbits on the nth turn is fibd(n-2) + fibd(n-3) + ... + fibd(n-(m+1)), with a lifespan of 3 the formula is different (it's just fibd(n-2) + fibd(n-3)). The next chunk of code is for lifespans of 4 or greater.
Any input would be appreciated, even some hints if you don't have a total solution. I am interested in learning how to optimize just as much as I am interested in solving this particular problem, since this isn't for work or school but rather for my own edification.
Both your functions have exponential time complexity, so they will be very slow. You will need to find a better algorithm. While you are thinking, you might want to run this program in Python, say fibs(50), and see what it does:
def fibs(n):
    a, b = 1, 1
    while n > 0:
        print(a)
        a, b, n = b, a+b, n-1
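If you want to go further, one way to turn that hint into a linear-time mortal-rabbits count is an age-bucket simulation; this sketch assumes the common convention that each pair lives m months and starts breeding in its second month (mortal_rabbits is an illustrative name, and the indexing may need adapting to your exact rules):
def mortal_rabbits(n, m):
    # ages[i] = number of rabbit pairs that are i months old
    ages = [1] + [0] * (m - 1)            # month 1: a single newborn pair
    for _ in range(n - 1):
        newborns = sum(ages[1:])          # every pair at least 1 month old breeds
        ages = [newborns] + ages[:-1]     # everyone ages one month; the oldest die
    return sum(ages)

print(mortal_rabbits(6, 3))   # 4 under this convention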
Below is a rework of your code with a better recursive Fibonacci function (though not as fast as an iterative one) and memoization, which @JohnY couldn't fit in his comment, taken from the blog post Memoized recursive fibonacci in Python.
The upgraded Fibonacci function alone improves the speed by about 20x. The memoize decorator, which caches values so they don't get recomputed, improves the speed by 40x. (My rewrite of fibd() makes no difference performance-wise, sigh.)
def memoize(f):
    cache = {}
    return lambda *args: cache[args] if args in cache else cache.update({args: f(*args)}) or cache[args]

@memoize
def fib(n, res=0, nxt=1):
    if n == 0:
        return res
    return fib(n - 1, nxt, res + nxt)

@memoize
def fibd(n, m):
    if m > 3:
        if n < (m + 2):
            return fib(n)
        return sum(map(fibd, (n - (m + x) for x in range(2 - m, 2)), [m] * m))
    elif m == 3:
        if n < (m + 1):
            return fib(n)
        return fibd(n - 2, m) + fibd(n - 3, m)

print(fibd(40, 15))
Overall performance went from 47 seconds to 1/20th of a second on my system. However, I only tested the fibd(40, 15) case; you'll need to do more testing of this modified code.
I've written out a recursive algorithm for a little homegrown computer algebra system, where I'm applying pairwise reductions to the list of operands of an algebraic operation (adjacent operands only, as the algebra is non-commutative). I'm trying to get an idea of the runtime complexity of my algorithm (but unfortunately, as a physicist it's been a very long time since I took any undergrad CS courses that dealt with complexity analysis). Without going into details of the specific problem, I think I can formalize the algorithm in terms of a function f that is a "divide" step and a function g that combines the results. My algorithm would then take the following formal representation:
f(1) = 1                        # recursion anchor for f
f(n) = g(f(n/2), f(n/2))

g(n, 0) = n, g(0, m) = m        # recursion ...
g(1, 0) = g(0, 1) = 1           # ... anchors for g

g(n, m) = g(g(n-1, 1), m-1)     if the reduction is "non-neutral"
g(n, m) = g(n-1, m-1)           if the reduction is "neutral"
g(n, m) = n + m                 if no reduction is possible
In this notation, the functions f and g receive lists as arguments and return lists; the numbers appearing as arguments and right-hand sides in the equations above refer to the lengths of those lists.
For the full story, the actual code corresponding to f and g is the following:
def _match_replace_binary(cls, ops: list) -> list:
    """Reduce list of `ops`"""
    n = len(ops)
    if n <= 1:
        return ops
    ops_left = ops[:n//2]
    ops_right = ops[n//2:]
    return _match_replace_binary_combine(
        cls,
        _match_replace_binary(cls, ops_left),
        _match_replace_binary(cls, ops_right))

def _match_replace_binary_combine(cls, a: list, b: list) -> list:
    """Combine two fully reduced lists a, b"""
    if len(a) == 0 or len(b) == 0:
        return a + b
    if len(a) == 1 and len(b) == 1:
        return a + b
    r = _get_binary_replacement(a[-1], b[0], cls._binary_rules)
    if r is None:
        return a + b
    if r == cls.neutral_element:
        return _match_replace_binary_combine(cls, a[:-1], b[1:])
    r = [r, ]
    return _match_replace_binary_combine(
        cls,
        _match_replace_binary_combine(cls, a[:-1], r),
        b[1:])
I'm interested in the worst-case number of times _get_binary_replacement is called, depending on the size of ops.
So I think I've got it now. To restate the problem: find the number of calls to _get_binary_replacement when calling _match_replace_binary with an input of size n.
define a function g(n, m) (as in the original question) that maps the sizes of the two inputs of _match_replace_binary_combine to the size of the output
define a function T_g(n, m) that maps the sizes of the two inputs of _match_replace_binary_combine to the total number of calls to g required to obtain the result. This is also the (worst-case) number of calls to _get_binary_replacement, since each call to _match_replace_binary_combine calls _get_binary_replacement at most once
We can now consider the worst case and best case for g:
best case (no reduction): g(n,m) = n + m, T_g(n, m) = 1
worst case (all non-neutral reduction): g(n, m) = 1, T_g(n, m) = 2*(n+m) - 1 (I determined this empirically)
Now, the master theorem (WP) applies:
Going through the description on WP:
k=1 (the recursion anchor is for size 1)
We split into a = 2 subproblems of size n/2 in constant (d = 1) time
After solving the subproblems, the amount of work required to combine the results is c = T_g(n/2, n/2). This is 2n-1 (that is, on the order of n) in the worst case and 1 in the best case
Thus, following the examples on the WP page for the master theorem, the worst case complexity is n * log(n), and the best case complexity is n
Empirical trials seem to bear out this result. Any objections to my line of reasoning?
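A quick numeric check of the worst-case recurrence, assuming the combine cost is T_g(n/2, n/2) = 2n - 1 as above; the ratio to n*log2(n) grows toward a constant, consistent with Θ(n log n):
import math

def T(n):
    # worst-case call count: two half-size subproblems plus the combine cost
    if n <= 1:
        return 0
    return 2 * T(n // 2) + 2 * n - 1   # T_g(n/2, n/2) = 2*(n/2 + n/2) - 1

for k in range(1, 11):
    n = 1 << k
    print(n, T(n), round(T(n) / (n * math.log2(n)), 3))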
I know there's already a question similar to this, but I want to speed it up using GMPY2 (or something similar with GMP).
Here is my current code, it's decent but can it be better?
Edit: new code, checks divisors 2 and 3
def factors(n):
    result = set()
    result |= {mpz(1), mpz(n)}

    def all_multiples(result, n, factor):
        z = mpz(n)
        while gmpy2.f_mod(mpz(z), factor) == 0:
            z = gmpy2.divexact(z, factor)
            result |= {mpz(factor), z}
        return result

    result = all_multiples(result, n, 2)
    result = all_multiples(result, n, 3)

    for i in range(1, gmpy2.isqrt(n) + 1, 6):
        i1 = mpz(i) + 1
        i2 = mpz(i) + 5
        div1, mod1 = gmpy2.f_divmod(n, i1)
        div2, mod2 = gmpy2.f_divmod(n, i2)
        if mod1 == 0:
            result |= {i1, div1}
        if mod2 == 0:
            result |= {i2, div2}
    return result
If it's possible, I'm also interested in an implementation with divisors only within n^(1/3) and 2^(2/3)*n^(1/3).
As an example, Mathematica's factoring is much faster than the Python code. I want to factor numbers between 20 and 50 decimal digits. I know ggnfs can factor these in less than 5 seconds.
I am also interested in whether any module implementing fast factorization exists for Python.
I just made some quick changes to your code to eliminate redundant name lookups. The algorithm is still the same but it is about twice as fast on my computer.
import gmpy2
from gmpy2 import mpz

def factors(n):
    result = set()
    n = mpz(n)
    for i in range(1, gmpy2.isqrt(n) + 1):
        div, mod = divmod(n, i)
        if not mod:
            result |= {mpz(i), div}
    return result

print(factors(12345678901234567))
Other suggestions will need more information about the size of the numbers, etc. For example, if you need all the possible factors, it may be faster to construct those from all the prime factors. That approach will let you decrease the limit of the range statement as you proceed and also will let you increment by 2 (after removing all the factors of 2).
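A minimal sketch of that idea (the function name and the dict-of-exponents input format are just illustrative):
def divisors_from_prime_factors(prime_factors):
    # prime_factors: {prime: exponent}, e.g. {2: 3, 5: 1} for 40
    divs = [1]
    for p, e in prime_factors.items():
        # extend every divisor found so far by each power of p
        divs = [d * p**k for d in divs for k in range(e + 1)]
    return sorted(divs)

print(divisors_from_prime_factors({2: 3, 5: 1}))  # [1, 2, 4, 5, 8, 10, 20, 40]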
Update 1
I've made some additional changes to your code. I don't think your all_multiples() function is correct. Your range() statement isn't optimal since 2 is checked again, but my first fix made it worse.
The new code delays computing the co-factor until it knows the remainder is 0. I also tried to use the built-in functions as much as possible. For example, mpz % integer is faster than gmpy2.f_mod(mpz, integer) or gmpy2.f_mod(integer, mpz) where integer is a normal Python integer.
import gmpy2
from gmpy2 import mpz, isqrt

def factors(n):
    n = mpz(n)
    result = set()
    result |= {mpz(1), n}

    def all_multiples(result, n, factor):
        z = n
        f = mpz(factor)
        while z % f == 0:
            result |= {f, z // f}
            f += factor
        return result

    result = all_multiples(result, n, 2)
    result = all_multiples(result, n, 3)

    for i in range(1, isqrt(n) + 1, 6):
        i1 = i + 1
        i2 = i + 5
        if not n % i1:
            result |= {mpz(i1), n // i1}
        if not n % i2:
            result |= {mpz(i2), n // i2}
    return result

print(factors(12345678901234567))
I would change your program to just find all the prime factors less than the square root of n and then construct all the co-factors later. Then you decrease n each time you find a factor, check if n is prime, and only look for more factors if n isn't prime.
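A bare-bones sketch of that shrinking-n idea with plain integers (illustrative names; it omits the primality-check shortcut and would use mpz and a real primality test such as gmpy2.is_prime in a production version):
def prime_factors(n):
    # trial division that shrinks n as factors are pulled out; stops at sqrt(n)
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1 if d == 2 else 2   # after 2, only try odd candidates
    if n > 1:
        factors[n] = factors.get(n, 0) + 1   # whatever remains is prime
    return factors

print(prime_factors(840))  # {2: 3, 3: 1, 5: 1, 7: 1}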
Update 2
The pyecm module should be able to factor the size numbers you are trying to factor. The following example completes in about a second.
>>> import pyecm
>>> list(pyecm.factors(12345678901234567890123456789012345678901, False, True, 10, 1))
[mpz(29), mpz(43), mpz(43), mpz(55202177), mpz(2928109491677), mpz(1424415039563189)]
There are various Python factoring modules on the Internet. But if you want to implement factoring yourself (without using external libraries), then I can suggest the quite fast and very easy to implement Pollard-Rho algorithm. I implemented it fully in my code below; just scroll down directly to the code (at the bottom of the answer) if you don't want to read the explanation.
With high probability, the Pollard-Rho algorithm finds the smallest non-trivial factor P (not equal to 1 or N) in O(Sqrt(P)) time. For comparison, the trial division algorithm that you implemented in your question takes O(P) time to find a factor P. This means, for example, that if a prime factor is P = 1 000 003, then trial division will find it after about 1 000 003 division operations, while Pollard-Rho on average will find it after just about 1 000 operations (Sqrt(1 000 003) ≈ 1 000), which is much, much faster.
To make the Pollard-Rho algorithm much faster, we should be able to detect prime numbers and exclude them from factoring so that we don't waste time unnecessarily. For that, my code uses the Fermat primality test, which is very fast and easy to implement in just 7-9 lines of code.
The Pollard-Rho algorithm itself is very short, 13-15 lines of code; you can see it at the very bottom of my pollard_rho_factor() function. The remaining lines of code are supplementary helper functions.
I implemented all algorithms from scratch without using extra libraries (except the random module). That's why you can see my gcd() function there, although you can use Python's built-in math.gcd() instead (which finds the greatest common divisor).
You can see the function Int() in my code; it is used just to convert Python integers to GMPY2 ones. GMPY2 ints make the algorithm faster, but you can just use Python's int(x) instead. I didn't use any GMPY2-specific functions, just converted all ints to GMPY2 ints to get around a 50% speedup.
As an example I factor the first 190 digits of Pi! It takes 3-15 seconds to factor them. The Pollard-Rho algorithm is randomized, hence it takes a different amount of time to factor the same number on each run. You can restart the program and see that it prints a different running time.
Of course the factoring time depends greatly on the size of the prime divisors. Some 50-200 digit numbers can be factored within a fraction of a second, while some will take months. My example of 190 digits of Pi has quite small prime factors except the largest one, which is why it is fast. Other digits of Pi may not be that fast to factor. So the digit-size of the number doesn't matter very much; only the size of the prime factors matters.
I intentionally implemented pollard_rho_factor() as one standalone function, without breaking it into smaller separate functions, although this goes against Python's style guide, which (as I remember) suggests avoiding nested functions and placing all functions and imports at global scope in the first lines of the script. I made it a single function intentionally so that it is easily copy-pastable and fully ready to use in your code. The Fermat primality test sub-function is_fermat_probable_prime() is also copy-pastable and works without extra dependencies.
In very rare cases the Pollard-Rho algorithm may fail to find a non-trivial prime factor, especially for very small factors; for example, you can replace n inside test() with the small number 4 and see that Pollard-Rho fails. For such small failed factors you can easily fall back to the trial division algorithm that you implemented in your question.
def pollard_rho_factor(N, *, trials = 16):
    # https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
    import math, random

    def Int(x):
        import gmpy2
        return gmpy2.mpz(x) # int(x)

    def is_fermat_probable_prime(n, *, trials = 32):
        # https://en.wikipedia.org/wiki/Fermat_primality_test
        import random
        if n <= 16:
            return n in (2, 3, 5, 7, 11, 13)
        for i in range(trials):
            if pow(random.randint(2, n - 2), n - 1, n) != 1:
                return False
        return True

    def gcd(a, b):
        # https://en.wikipedia.org/wiki/Greatest_common_divisor
        # https://en.wikipedia.org/wiki/Euclidean_algorithm
        while b != 0:
            a, b = b, a % b
        return a

    def found(f, prime):
        print(f'Found {("composite", "prime")[prime]} factor, {math.log2(f):>7.03f} bits... {("Pollard-Rho failed to fully factor it!", "")[prime]}')
        return f

    N = Int(N)
    if N <= 1:
        return []
    if is_fermat_probable_prime(N):
        return [found(N, True)]
    for j in range(trials):
        i, stage, y, x = 0, 2, Int(1), Int(random.randint(1, N - 2))
        while True:
            r = gcd(N, abs(x - y))
            if r != 1:
                break
            if i == stage:
                y = x
                stage <<= 1
            x = (x * x + 1) % N
            i += 1
        if r != N:
            return sorted(pollard_rho_factor(r) + pollard_rho_factor(N // r))
    return [found(N, False)] # Pollard-Rho failed

def test():
    import time
    # http://www.math.com/tables/constants/pi.htm
    # pi = 3.
    # 1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679
    # 8214808651 3282306647 0938446095 5058223172 5359408128 4811174502 8410270193 8521105559 6446229489 5493038196
    # n = first 190 fractional digits of Pi
    n = 1415926535_8979323846_2643383279_5028841971_6939937510_5820974944_5923078164_0628620899_8628034825_3421170679_8214808651_3282306647_0938446095_5058223172_5359408128_4811174502_8410270193_8521105559_6446229489
    tb = time.time()
    print('N:', n)
    print('Factors:', pollard_rho_factor(n))
    print(f'Time: {time.time() - tb:.03f} sec')

test()
Output:
N: 1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489
Found prime factor, 1.585 bits...
Found prime factor, 6.150 bits...
Found prime factor, 20.020 bits...
Found prime factor, 27.193 bits...
Found prime factor, 28.311 bits...
Found prime factor, 545.087 bits...
Factors: [mpz(3), mpz(71), mpz(1063541), mpz(153422959), mpz(332958319), mpz(122356390229851897378935483485536580757336676443481705501726535578690975860555141829117483263572548187951860901335596150415443615382488933330968669408906073630300473)]
Time: 2.963 sec