Fermat Factorisation with Python

I'm new to Python and not sure why my Fermat factorisation method is failing. I think it may have something to do with the way large numbers are implemented, but I don't know enough about the language to determine where I'm going wrong.
The code below works when n=p*q is built from p and q that are extremely close (within about 20 of each other), but it seems to run forever if they are further apart. For example, with n=991*997 the code works correctly and executes in <1s, likewise for n=104729*104659. If I change it to n=103591*104659, however, it just runs forever (well, I let it go for 2 hours and then stopped it).
Any points in the right direction would be greatly appreciated!
Code:
import math

def isqrt(n):
    x = n
    y = (x + n // x) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

n = 103591*104729
a = isqrt(n) + 1
b2 = a*a - n
b = isqrt(b2)
while b*b != b2:
    a = a + 1
    b2 = b2 + 2*a + 1
    b = isqrt(b2)
p = a + b
q = a - b
print('a=', a, '\n')
print('b=', b, '\n')
print('p=', p, '\n')
print('q=', q, '\n')
print('pq=', p*q, '\n')
print('n=', n, '\n')
print('diff=', n - p*q, '\n')

I looked up the algorithm on Wikipedia and this works for me:
#from math import ceil

def isqrt(n):
    x = n
    y = (x + n // x) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

def fermat(n, verbose=True):
    a = isqrt(n)  # int(ceil(n**0.5))
    b2 = a*a - n
    b = isqrt(n)  # int(b2**0.5)
    count = 0
    while b*b != b2:
        if verbose:
            print('Trying: a=%s b2=%s b=%s' % (a, b2, b))
        a = a + 1
        b2 = a*a - n
        b = isqrt(b2)  # int(b2**0.5)
        count += 1
    p = a + b
    q = a - b
    assert n == p * q
    print('a=', a)
    print('b=', b)
    print('p=', p)
    print('q=', q)
    print('pq=', p*q)
    return p, q

n = 103591*104729
fermat(n)
I tried a couple of test cases. This one is from the Wikipedia page:
>>> fermat(5959)
Trying: a=78 b2=125 b=11
Trying: a=79 b2=282 b=16
a= 80
b= 21
p= 101
q= 59
pq= 5959
(101, 59)
This one is your sample case:
>>> fermat(103591*104729)
Trying: a=104159 b2=115442 b=339
a= 104160
b= 569
p= 104729
q= 103591
pq= 10848981839
(104729, 103591)
Looking at the lines labeled "Trying" shows that, in both cases, it converges quite quickly.
UPDATE: Your very long integer from the comments factors as follows:
n_long=316033277426326097045474758505704980910037958719395560565571239100878192955228495343184968305477308460190076404967552110644822298179716669689426595435572597197633507818204621591917460417859294285475630901332588545477552125047019022149746524843545923758425353103063134585375275638257720039414711534847429265419
fermat(n_long, verbose=False)
a= 17777324810733646969488445787976391269105128850805128551409042425916175469326288448917184096591563031034494377135896478412527365012246902424894591094668262
b= 157517855001095328119226302991766503492827415095855495279739107269808590287074235
p= 17777324810733646969488445787976391269105128850805128551409042425916175469483806303918279424710789334026260880628723893508382860291986009694703181381742497
q= 17777324810733646969488445787976391269105128850805128551409042425916175469168770593916088768472336728042727873643069063316671869732507795155086000807594027
pq= 316033277426326097045474758505704980910037958719395560565571239100878192955228495343184968305477308460190076404967552110644822298179716669689426595435572597197633507818204621591917460417859294285475630901332588545477552125047019022149746524843545923758425353103063134585375275638257720039414711534847429265419

The error was doing the addition after incrementing a, so the new value of b2 no longer corresponded to a*a - n.
This works as intended:
while b*b != b2:
    b2 += 2*a + 1
    a = a + 1
    b = isqrt(b2)
For big numbers this incremental update should be faster than recomputing the full square, which has far more digits.
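Putting the fix together, here is a minimal self-contained sketch of the corrected routine (my assembly of the above, assuming Python 3; the name fermat_fixed is just illustrative, and math.isqrt could replace the hand-rolled isqrt on Python 3.8+):
def isqrt(n):
    x = n
    y = (x + n // x) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

def fermat_fixed(n):
    a = isqrt(n) + 1              # start just above sqrt(n)
    b2 = a*a - n
    b = isqrt(b2)
    while b*b != b2:
        b2 += 2*a + 1             # (a+1)**2 - n == (a*a - n) + 2*a + 1, so update b2 first
        a += 1
        b = isqrt(b2)
    return a + b, a - b           # p, q

print(fermat_fixed(103591*104729))  # (104729, 103591)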

Related

Let n be a square number. Using Python, how can we efficiently calculate natural numbers y up to a limit l such that n+y^2 is again a square number?

Using Python, I would like to implement a function that takes a natural number n as input and outputs a list of natural numbers [y1, y2, y3, ...] such that n + y1*y1, n + y2*y2, n + y3*y3, and so forth are each again a square.
What I tried so far is to obtain one y-value using the following function:
def find_square(n: int) -> tuple[int, int]:
    if n % 2 == 1:
        y = (n-1)//2
        x = n + y*y
        return (y, x)
    return None
It works fine; e.g. find_square(13689) gives me a correct solution, y=6844. It would be great to have an algorithm that yields all possible y-values, such as y=44 or y=156.
The simplest (slow) approach is of course, for a given N, to just iterate over all possible Y and check whether N + Y^2 is a square.
But there is a much faster approach based on integer factorization:
Notice that to solve the equation N + Y^2 = X^2, i.e. to find all integer pairs (X, Y) for a given fixed N, we can rewrite it as N = X^2 - Y^2 = (X + Y) * (X - Y), using the well-known difference-of-squares identity.
Now rename the two factors as A and B, i.e. N = (X + Y) * (X - Y) = A * B, which means that X = (A + B) / 2 and Y = (A - B) / 2.
Notice that A and B must have the same parity, either both odd or both even, otherwise the divisions by 2 in the formulas above would not be exact.
We will factorize N into all possible pairs of factors (A, B) of the same parity. For fast factorization, the code below uses Pollard's Rho, which is simple to implement yet quite fast. Two helper algorithms are also needed: the Fermat primality test (which quickly checks whether a number is probably prime) and trial division factorization (which strips out the small factors that could otherwise cause Pollard's Rho to fail).
Pollard's Rho has time complexity O(N^(1/4)) for a composite number, which is very fast even for 64-bit numbers. A faster factorization algorithm can be substituted if a bigger space needs to be searched. The running time of my fast algorithm is dominated by the factorization; the remaining part is blazingly fast, just a few loop iterations with simple formulas.
If your N is itself a square (so its root is easy to obtain), then Pollard's Rho can factor N even faster, in O(N^(1/8)) time. Even for 128-bit numbers that means very little work, around 2^16 operations, and I assume you're solving your task for numbers of fewer than 128 bits.
If you want to process a whole range of possible N values, the fastest way to factorize them is to use techniques similar to the Sieve of Eratosthenes: using a set of prime numbers, it computes the factors of every N within some range. For a range of Ns this is much faster than factorizing each N individually with Pollard's Rho.
After factoring N into pairs (A, B), we compute (X, Y) from (A, B) using the formulas above and output the resulting Y values as the solutions of the fast algorithm.
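As a small hand-worked illustration of that step (my addition, with N = 105 chosen purely as an example):
N = 105
for A in (105, 35, 21, 15):        # divisor pairs of N with A >= N // A
    B = N // A
    if (A - B) % 2 == 0:           # same parity, so X and Y come out as integers
        X, Y = (A + B) // 2, (A - B) // 2
        assert N + Y*Y == X*X
        print(A, B, '->', 'X =', X, 'Y =', Y)
This prints the solutions Y = 52, 16, 8, 4, the same values find_fast(105) below returns (sorted: [4, 8, 16, 52]).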
The following example code is implemented in pure Python. Of course one can use Numba to speed it up; Numba usually gives a 30-200x speedup and brings Python close to the speed of optimized C++. But the main thing here is to implement the fast algorithm; Numba optimizations can easily be added afterwards.
I added time measurement to the code. Although it is pure Python, the fast algorithm still achieves roughly an 8500x speedup over the regular brute-force approach for a limit of 1,000,000.
You can change the limit variable to tweak the size of the searched space, or the num_tests variable to tweak the number of tests.
The following code implements both solutions: the fast solution find_fast() described above, plus a tiny brute-force solution find_slow(), which is very slow because it scans all possible candidates. The slow solution is only used in the tests to check correctness and to measure the speedup.
The code below uses nothing except a few standard-library modules; no external modules are required.
def find_slow(N):
    import math
    def is_square(x):
        root = int(math.sqrt(float(x)) + 0.5)
        return root * root == x, root
    l = []
    for y in range(N):
        if is_square(N + y ** 2)[0]:
            l.append(y)
    return l

def find_fast(N):
    import itertools, functools
    Prod = lambda it: functools.reduce(lambda a, b: a * b, it, 1)
    fs = factor(N)
    mfs = {}
    for e in fs:
        mfs[e] = mfs.get(e, 0) + 1
    fs = sorted(mfs.items())
    del mfs
    Ys = set()
    for take_a in itertools.product(*[
            (range(v + 1) if k != 2 else range(1, v)) for k, v in fs]):
        A = Prod([p ** t for (p, _), t in zip(fs, take_a)])
        B = N // A
        assert A * B == N, (N, A, B, take_a)
        if A < B:
            continue
        X = (A + B) // 2
        Y = (A - B) // 2
        assert N + Y ** 2 == X ** 2, (N, A, B, X, Y)
        Ys.add(Y)
    return sorted(Ys)

def trial_div_factor(n, limit = None):
    # https://en.wikipedia.org/wiki/Trial_division
    fs = []
    while n & 1 == 0:
        fs.append(2)
        n >>= 1
    all_checked = False
    for d in range(3, (limit or n) + 1, 2):
        if d * d > n:
            all_checked = True
            break
        while True:
            q, r = divmod(n, d)
            if r != 0:
                break
            fs.append(d)
            n = q
    if n > 1 and all_checked:
        fs.append(n)
        n = 1
    return fs, n

def fermat_prp(n, trials = 32):
    # https://en.wikipedia.org/wiki/Fermat_primality_test
    import random
    if n <= 16:
        return n in (2, 3, 5, 7, 11, 13)
    for i in range(trials):
        if pow(random.randint(2, n - 2), n - 1, n) != 1:
            return False
    return True

def pollard_rho_factor(n):
    # https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
    import math, random
    fs, n = trial_div_factor(n, 1 << 7)
    if n <= 1:
        return fs
    if fermat_prp(n):
        return sorted(fs + [n])
    for itry in range(8):
        failed = False
        x = random.randint(2, n - 2)
        for cycle in range(1, 1 << 60):
            y = x
            for i in range(1 << cycle):
                x = (x * x + 1) % n
                d = math.gcd(x - y, n)
                if d == 1:
                    continue
                if d == n:
                    failed = True
                    break
                return sorted(fs + pollard_rho_factor(d) + pollard_rho_factor(n // d))
            if failed:
                break
    assert False, f'Pollard Rho failed! n = {n}'

def factor(N):
    import functools
    Prod = lambda it: functools.reduce(lambda a, b: a * b, it, 1)
    fs = pollard_rho_factor(N)
    assert N == Prod(fs), (N, fs)
    return sorted(fs)

def test():
    import random, time
    limit = 1 << 20
    num_tests = 20
    t0, t1 = 0, 0
    for i in range(num_tests):
        if (round(i / num_tests * 1000)) % 100 == 0 or i + 1 >= num_tests:
            print(f'test {i}, ', end = '', flush = True)
        N = random.randrange(limit)
        tb = time.time()
        r0 = find_slow(N)
        t0 += time.time() - tb
        tb = time.time()
        r1 = find_fast(N)
        t1 += time.time() - tb
        assert r0 == r1, (N, r0, r1, t0, t1)
    print(f'\nTime slow {t0:.05f} sec, fast {t1:.05f} sec, speedup {round(t0 / max(1e-6, t1))} times')

if __name__ == '__main__':
    test()
Output:
test 0, test 2, test 4, test 6, test 8, test 10, test 12, test 14, test 16, test 18, test 19,
Time slow 26.28198 sec, fast 0.00301 sec, speedup 8732 times
For the easiest solution, you can try this:
import math

n = 13689  # or we can ask the user to input a square number
for i in range(1, 9999):
    if math.sqrt(n + i**2).is_integer():
        print(i)
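One caveat worth adding (my note, not part of the answer above): math.sqrt works in floating point, so the .is_integer() test can start giving wrong answers once n + i**2 grows past roughly 2**52. On Python 3.8+ an exact variant can use math.isqrt instead:
import math

n = 13689
for i in range(1, 9999):
    s = n + i**2
    r = math.isqrt(s)       # exact integer square root, no float rounding
    if r * r == s:
        print(i)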

numpy precision with large numbers

I want to factorize a large number using Fermat's factorization method. This is how I implemented it:
import numpy as np

def fac(n):
    x = np.ceil(np.sqrt(n))
    y = x*x - n
    while not np.sqrt(y).is_integer():
        x += 1
        y = x*x - n
    return (x + np.sqrt(y), x - np.sqrt(y))
Using this method I want to factor N into its components. Note that N=p*q, where p and q are prime.
I chose the following values to compute N:
p = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533846141.0
q = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533845933.0
and defined N
N = p*q
Now I factor N:
r = fac(N)
However, the factorization seems to not be correct:
int(r[0])*int(r[1]) == N
It does work for smaller ints:
fac(65537)
Out[1]: (65537.0, 1.0)
I'm quite sure the reason is numerical precision at some point.
I tried calculating N in numpy using object types:
N = np.dot(np.array(p).astype(object), np.array(q).astype(object))
but it doesn't help; numpy still requires a float for its sqrt function.
I also tried using the math library instead of numpy; it does not seem to require a float for its sqrt function, but I ultimately ran into precision issues as well.
Python ints are multiple-precision numbers. numpy, however, is a wrapper around low-level C libraries used to speed up operations, and the downside is that it cannot handle those multi-precision numbers. Worse, if you try to use np.sqrt on them, they are converted to floating-point numbers (C double, i.e. numpy float64), which have a precision of about 15 decimal digits.
But as the Python int type is already a multi-precision type, you can use math.sqrt to get an approximate value of the true square root and then use Newton's method to refine it:
import math

def isqrt(n):
    x = int(math.sqrt(n))
    old = None
    while True:
        d = (n - x * x) // (2 * x)
        if d == 0: break
        if d == 1:  # infinite loop prevention
            if old is None:
                old = 1
            else: break
        x += d
    return x
Using it, your fac function could become:
def fac(n):
    x = isqrt(n)
    if x*x < n: x += 1
    y = x*x - n
    while True:
        z = isqrt(y)
        if z*z == y: break
        x += 1
        y = x*x - n
    return x+z, x-z
Demo:
p = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533846141
q = 34058934059834598495823984675767545695711020949846845989934523432842834738974239847294083409583495898523872347284789757987987387543533845933
N = p*q
print(fac(N) == (p,q))
prints True, as expected.
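As a side note (my addition): on Python 3.8+ the standard library provides an exact integer square root, math.isqrt, so the hand-rolled Newton iteration can be dropped. A minimal sketch of the same fac idea on top of it:
import math

def fac(n):
    x = math.isqrt(n - 1) + 1      # smallest x with x*x >= n
    while True:
        y = x*x - n
        z = math.isqrt(y)
        if z*z == y:
            return x + z, x - z
        x += 1
With the 140-digit p and q above, fac(p*q) == (p, q) still holds.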

Karatsuba Multiplication Implementation

I recently implemented Karatsuba multiplication as a personal exercise. I wrote my implementation in Python following the pseudocode provided on Wikipedia:
procedure karatsuba(num1, num2)
    if (num1 < 10) or (num2 < 10)
        return num1*num2
    /* calculates the size of the numbers */
    m = max(size_base10(num1), size_base10(num2))
    m2 = m/2
    /* split the digit sequences about the middle */
    high1, low1 = split_at(num1, m2)
    high2, low2 = split_at(num2, m2)
    /* 3 calls made to numbers approximately half the size */
    z0 = karatsuba(low1, low2)
    z1 = karatsuba((low1+high1), (low2+high2))
    z2 = karatsuba(high1, high2)
    return (z2*10^(2*m2)) + ((z1-z2-z0)*10^(m2)) + (z0)
Here is my python implementation:
def karat(x, y):
    if len(str(x)) == 1 or len(str(y)) == 1:
        return x*y
    else:
        m = max(len(str(x)), len(str(y)))
        m2 = m / 2
        a = x / 10**(m2)
        b = x % 10**(m2)
        c = y / 10**(m2)
        d = y % 10**(m2)
        z0 = karat(b, d)
        z1 = karat((a+b), (c+d))
        z2 = karat(a, c)
        return (z2 * 10**(2*m2)) + ((z1 - z2 - z0) * 10**(m2)) + (z0)
My question is about the final merge of z0, z1, and z2.
z2 is shifted left (where m is the length of the larger of the two multiplied numbers).
Instead of simply multiplying by 10^m, the algorithm uses 10^(2*m2), where m2 is m/2.
I tried replacing 2*m2 with m and got incorrect results. I think this has to do with how the numbers are split but I'm not really sure what's going on.
Depending on your Python version, you must or should replace / with the explicit floor division operator //, which is the appropriate one here; it rounds down, ensuring that your exponents remain whole numbers.
This is essential, for example, when splitting your operands into high digits (by floor dividing by 10^m2) and low digits (by taking the remainder modulo 10^m2); this would not work with a fractional m2.
It also explains why 2 * (x // 2) does not necessarily equal x but rather x-1 if x is odd.
In the last line of the algorithm, 2*m2 is correct because what you are doing is giving a and c their zeros back.
If you are on an older Python version your code may still work because / used to be interpreted as floor division when applied to integers.
def karat(x, y):
    if len(str(x)) == 1 or len(str(y)) == 1:
        return x*y
    else:
        m = max(len(str(x)), len(str(y)))
        m2 = m // 2
        a = x // 10**(m2)
        b = x % 10**(m2)
        c = y // 10**(m2)
        d = y % 10**(m2)
        z0 = karat(b, d)
        z1 = karat((a+b), (c+d))
        z2 = karat(a, c)
        return (z2 * 10**(2*m2)) + ((z1 - z2 - z0) * 10**(m2)) + (z0)
I implemented the same idea, but I restricted the base case to two-digit multiplication, which reduces the number of multiplications done in the function:
import math

def multiply(x, y):
    sx = str(x)
    sy = str(y)
    nx = len(sx)
    ny = len(sy)
    if ny <= 2 or nx <= 2:
        r = int(x)*int(y)
        return r
    n = nx
    if nx > ny:
        sy = sy.rjust(nx, "0")
        n = nx
    elif ny > nx:
        sx = sx.rjust(ny, "0")
        n = ny
    m = n % 2
    offset = 0
    if m != 0:
        n += 1
        offset = 1
    floor = int(math.floor(n/2)) - offset
    a = sx[0:floor]
    b = sx[floor:n]
    c = sy[0:floor]
    d = sy[floor:n]
    print(a, b, c, d)
    ac = multiply(a, c)
    bd = multiply(b, d)
    ad_bc = multiply((int(a)+int(b)), (int(c)+int(d))) - ac - bd
    r = ((10**n)*ac) + ((10**(n//2))*ad_bc) + bd  # n//2 keeps the shift an exact integer
    return r
print(multiply(4,5))
print(multiply(4,58779))
print(int(multiply(4872139874092183,5977098709879)))
print(int(4872139874092183*5977098709879))
print(int(multiply(4872349085723098457,597340985723098475)))
print(int(4872349085723098457*597340985723098475))
print(int(multiply(4908347590823749,97098709870985)))
print(int(4908347590823749*97098709870985))
I tried replacing 2*m2 with m and got incorrect results. I think this has to do with how the numbers are split but I'm not really sure what's going on.
This goes to the heart of how you split your numbers for the recursive calls.
If you choose an odd n, then n//2 will be rounded down to the nearest whole number, meaning the low half of each number will have floor(n/2) digits, and the high half has to be padded back with floor(n/2) zeros when recombining.
Since we use the same n for both numbers, this applies to both of them. It means that if you stuck with the original odd n for the final step, you would be padding the first term with the original n zeros instead of the number of zeros that actually results from the two splits combined, floor(n/2)*2; the small example below makes this concrete.
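Here is that example, worked with my own numbers (not from the answer), using an odd length m = 5:
x, y = 12345, 67891
m = max(len(str(x)), len(str(y)))    # 5 (odd)
m2 = m // 2                          # 2
a, b = divmod(x, 10**m2)             # a = 123, b = 45
c, d = divmod(y, 10**m2)             # c = 678, d = 91

# x = a*10**m2 + b and y = c*10**m2 + d, so x*y needs a*c shifted by 2*m2 = 4 digits
assert x*y == a*c*10**(2*m2) + (a*d + b*c)*10**m2 + b*d
# shifting a*c by m = 5 digits instead is off by a factor of 10
assert x*y != a*c*10**m + (a*d + b*c)*10**m2 + b*d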
You have used m2 as a float. It needs to be an integer.
def karat(x, y):
    if len(str(x)) == 1 or len(str(y)) == 1:
        return x*y
    else:
        m = max(len(str(x)), len(str(y)))
        m2 = m // 2
        a = x // 10**(m2)
        b = x % 10**(m2)
        c = y // 10**(m2)
        d = y % 10**(m2)
        z0 = karat(b, d)
        z1 = karat((a+b), (c+d))
        z2 = karat(a, c)
        return (z2 * 10**(2*m2)) + ((z1 - z2 - z0) * 10**(m2)) + (z0)
Your code and logic are correct; there is just an issue with your base case. Since, according to the algorithm, a, b, c, d are 2-digit numbers, you should modify your base case and keep the length of x and y equal to 2 in the base case.
I think it is better to use the math.log10 function to calculate the number of digits instead of converting to a string, something like this:
import math

def number_of_digits(number):
    """
    Uses log10 to find the number of digits.
    """
    if number > 0:
        return int(math.log10(number)) + 1
    elif number == 0:
        return 1
    else:
        return int(math.log10(-number)) + 1  # Don't count the '-'
The base case if len(str(x)) == 1 or len(str(y)) == 1: return x*y is incorrect. If you run either of the Python implementations given in the answers against large integers, the karat() function will not produce the correct answer.
To make the code correct, you need to change the base case to if len(str(x)) < 3 or len(str(y)) < 3: return x*y.
Below is a modified implementation of Paul Panzer's answer that correctly multiplies large integers.
def karat(x, y):
    if len(str(x)) < 3 or len(str(y)) < 3:
        return x*y
    n = max(len(str(x)), len(str(y))) // 2
    a = x // 10**(n)
    b = x % 10**(n)
    c = y // 10**(n)
    d = y % 10**(n)
    z0 = karat(b, d)
    z1 = karat((a+b), (c+d))
    z2 = karat(a, c)
    return ((10**(2*n))*z2) + ((10**n)*(z1-z2-z0)) + z0
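A quick way to sanity-check this (my addition) is to compare it against Python's built-in big-integer multiplication on random inputs:
import random

for _ in range(1000):
    x = random.randrange(10**50)
    y = random.randrange(10**50)
    assert karat(x, y) == x * y, (x, y)
print("all checks passed")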

How to generate random numbers in CX CAS calculator in python [duplicate]

I'm using Python for a competition in which I am creating a bot to play a game. The problem is that the environment does not have any C support installed, so I do not have access to the random, numpy, and scipy modules.
I will have roughly 400 MB of RAM available, and I am looking for a way to produce uniform random numbers between 0 and 1 for simulation purposes during the game.
Note that I have used the clock time before to generate a single number, but the issue is that I will need loads of numbers without the clock changing much, which would result in getting the same number over and over. In fact, I am limited to a maximum of 1 second for, say, 100k numbers.
I'm considering loading in data, but the problem would then be that the bot would always use the same numbers. Then again, the circumstances for which I need to use the numbers vary slightly.
Using Python 2.7, hoping people have some suggestions.
FWIW, the random module contains a Wichmann-Hill generator class written in pure Python (no C required):
>>> import random
>>> rng = random.WichmannHill(8675309)
>>> rng.random()
0.06246664612856567
>>> rng.random()
0.3049888099198217
Here's the cleaned-up source code:
class WichmannHill(Random):
    def seed(self, a=None):
        a, x = divmod(a, 30268)
        a, y = divmod(a, 30306)
        a, z = divmod(a, 30322)
        self._seed = int(x)+1, int(y)+1, int(z)+1

    def random(self):
        """Get the next random number in the range [0.0, 1.0)."""
        x, y, z = self._seed
        x = (171 * x) % 30269
        y = (172 * y) % 30307
        z = (170 * z) % 30323
        self._seed = x, y, z
        return (x/30269.0 + y/30307.0 + z/30323.0) % 1.0
You can use a Mersenne Twister implementation. I found this one, which is modeled after the pseudocode on Wikipedia.
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Based on the pseudocode in https://en.wikipedia.org/wiki/Mersenne_Twister.
Generates uniformly distributed 32-bit integers in the range [0, 2**32 - 1]
with the MT19937 algorithm.

Yaşar Arabacı <yasar11732 et gmail nokta com>
"""

# Create a length 624 list to store the state of the generator
MT = [0 for i in xrange(624)]
index = 0

# To get the last 32 bits
bitmask_1 = (2 ** 32) - 1

# To get the 32nd bit
bitmask_2 = 2 ** 31

# To get the last 31 bits
bitmask_3 = (2 ** 31) - 1

def initialize_generator(seed):
    "Initialize the generator from a seed"
    global MT
    global bitmask_1
    MT[0] = seed
    for i in xrange(1, 624):
        MT[i] = ((1812433253 * MT[i-1]) ^ ((MT[i-1] >> 30) + i)) & bitmask_1

def extract_number():
    """
    Extract a tempered pseudorandom number based on the index-th value,
    calling generate_numbers() every 624 numbers
    """
    global index
    global MT
    if index == 0:
        generate_numbers()
    y = MT[index]
    y ^= y >> 11
    y ^= (y << 7) & 2636928640
    y ^= (y << 15) & 4022730752
    y ^= y >> 18
    index = (index + 1) % 624
    return y

def generate_numbers():
    "Generate an array of 624 untempered numbers"
    global MT
    for i in xrange(624):
        y = (MT[i] & bitmask_2) + (MT[(i + 1) % 624] & bitmask_3)
        MT[i] = MT[(i + 397) % 624] ^ (y >> 1)
        if y % 2 != 0:
            MT[i] ^= 2567483615

if __name__ == "__main__":
    from datetime import datetime
    now = datetime.now()
    initialize_generator(now.microsecond)
    # Print 100 random numbers as an example
    for i in xrange(100):
        print extract_number()
If the script is run on Linux, try using /dev/urandom:
with open('/dev/urandom', 'rb') as f:
    random_int = reduce(lambda acc, x: (acc << 8) | x, map(ord, f.read(4)), 0)
f.read(4) - reads 4 bytes of entropy
map(ord, f.read(4)) - converts the bytes into a list of numbers
reduce(lambda ..., map(...), 0) - combines the list of numbers into a single 32-bit integer
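Since the question asks for uniform numbers between 0 and 1, a small follow-up sketch (my addition, Python 2 like the snippet above) that scales each 4-byte read down to a float in [0.0, 1.0):
with open('/dev/urandom', 'rb') as f:
    for _ in xrange(5):
        n = reduce(lambda acc, x: (acc << 8) | x, map(ord, f.read(4)), 0)
        print(n / float(2 ** 32))   # uniform float in [0.0, 1.0)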
Maths is your best answer: http://en.m.wikipedia.org/wiki/Linear_congruential_generator
X(n+1) = (aX(n)+c) mod m
x2 = (a*x1+c)%m
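For completeness, a minimal pure-Python LCG sketch (my addition; the constants a = 1664525, c = 1013904223, m = 2**32 are the usual Numerical Recipes choice, not something specified in the answer):
class LCG(object):
    def __init__(self, seed):
        self.state = seed % (2 ** 32)

    def random(self):
        # X(n+1) = (a*X(n) + c) mod m, scaled to [0.0, 1.0)
        self.state = (1664525 * self.state + 1013904223) % (2 ** 32)
        return self.state / float(2 ** 32)

rng = LCG(12345)
for _ in range(3):
    print(rng.random())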

Python: Streamlining Code for Brown Numbers

I was curious if any of you could come up with a more streamlined version of code to calculate Brown numbers. At the moment, this code can do about 650! before it slows to a crawl. Brown numbers are pairs satisfying the equation n! + 1 = m**2, where m is an integer.
brownNum = 8
import math

def squareNum(n):
    x = n // 2
    seen = set([x])
    while x * x != n:
        x = (x + (n // x)) // 2
        if x in seen: return False
        seen.add(x)
    return True

while True:
    for i in range(math.factorial(brownNum)+1, math.factorial(brownNum)+2):
        if squareNum(i) is True:
            print("pass")
            print(brownNum)
            print(math.factorial(brownNum)+1)
            break
        else:
            print(brownNum)
            print(math.factorial(brownNum)+1)
    brownNum = brownNum + 1
    continue
    break

print(input(" "))
Sorry, I don't understand the logic behind your code.
I don't understand why you calculate math.factorial(brownNum) 4 times with the same value of brownNum each time through the while True loop. And in the for loop:
for i in range(math.factorial(brownNum)+1,math.factorial(brownNum)+2):
i will only take on the value of math.factorial(brownNum)+1
Anyway, here's my Python 3 code for a brute force search of Brown numbers. It quickly finds the only 3 known pairs, and then proceeds to test all the other numbers under 1000 in around 1.8 seconds on this 2GHz 32 bit machine. After that point you can see it slowing down (it hits 2000 around the 20 second mark) but it will chug along happily until the factorials get too large for your machine to hold.
I print progress information to stderr so that it can be separated from the Brown number pair output. Also, stderr doesn't require flushing when you don't print a newline, unlike stdout (at least, it doesn't on Linux).
import sys

# Calculate the integer square root of `m` using Newton's method.
# Returns r: r**2 <= m < (r+1)**2
def int_sqrt(m):
    if m <= 0:
        return 0
    n = m << 2
    r = n >> (n.bit_length() // 2)
    while True:
        d = (n // r - r) >> 1
        r += d
        if -1 <= d <= 1:
            break
    return r >> 1

# Search for Brown numbers
fac = i = 1
while True:
    if i % 100 == 0:
        print('\r', i, file=sys.stderr, end='')
    fac *= i
    n = fac + 1
    r = int_sqrt(n)
    if r*r == n:
        print('\nFound', i, r)
    i += 1
You might want to:
precalculate your square numbers, instead of testing for them on the fly
precalculate the factorial once per loop iteration, num_fac = math.factorial(brownNum), instead of making multiple calls
implement your own, memoized, factorial
That should let you run to the hard limits of your machine.
One optimization I would make is to implement a 'wrapper' function around math.factorial that caches previously computed factorials, so that as brownNum increases, factorial doesn't have as much work to do. This is known as 'memoization' in computer science.
Edit: I found another SO answer with a similar intention: Python: Is math.factorial memoized?
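A minimal sketch of such a caching wrapper (my own illustration of the idea, not code from the linked answer); since brownNum only ever grows by 1, it just extends the last factorial it computed:
_fact_cache = {0: 1}
_fact_max = 0

def cached_factorial(n):
    """Return n!, reusing the largest previously computed factorial."""
    global _fact_max
    while _fact_max < n:
        _fact_max += 1
        _fact_cache[_fact_max] = _fact_cache[_fact_max - 1] * _fact_max
    return _fact_cache[n]
Each call then only performs the multiplications for the values of n it has not seen yet.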
You should also initialize the square root estimate closer to the actual root.
e = int(math.log(n,4))
x = n//2**e
Because 4**e <= n < 4**(e+1), the square root lies between x/2 and x, which should give quadratic convergence of the Heron iteration from the first step on.
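Put together, a small sketch of an integer square root that uses this starting value (my own assembly of the suggestion, for illustration only):
import math

def isqrt_heron(n):
    if n < 4:
        return int(n > 0)
    e = int(math.log(n, 4))
    if 4**e > n:                  # guard against float rounding in math.log
        e -= 1
    x = n // 2**e                 # roughly sqrt(n) <= x <= 2*sqrt(n)
    while True:
        y = (x + n // x) // 2     # Heron / Newton step
        if y >= x:
            return x
        x = y

print(isqrt_heron(10**40))        # 10**20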
