This is my factorising code, which finds all the factors of a number, but after roughly 7 digits the program begins to slow down.
So I was wondering whether there is any way of optimising this program so that it can factorise numbers faster.
number = int(input("Input the whole number here?\n"))
factors = [1]
def factorization():
    global factors
    for i in range(1, number):
        factor = (number/i)
        try:
            factorInt = int(number/i)
            if factorInt == factor:
                factors.append(factorInt)
        except ValueError:
            pass
factorization()
print(factors)
The most effective optimization comes from noting that when the number has non-trivial factors, the smallest of them is no larger than the square root of the number, so there is no need to continue looping past this square root.
Indeed, let this smallest factor be m. We have n = m·p, and the other factor satisfies p >= m. But if m > √n, then m·p > n, a contradiction.
Note that this optimization only speeds up the processing of prime numbers (for composite ones, the search stops before √n anyway). But the density of primes and the fact that n is much larger than √n make it absolutely worth it.
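For reference, here is a minimal sketch of that square-root cutoff applied to the same task (taking the number as a parameter instead of a global; every divisor i found below the square root also yields the co-divisor number // i):

def factorization(number):
    # try candidates only up to sqrt(number); each hit i also yields number // i
    small, large = [], []
    i = 1
    while i * i <= number:
        if number % i == 0:
            small.append(i)
            if i != number // i:        # don't list the square root twice
                large.append(number // i)
        i += 1
    return small + large[::-1]

print(factorization(36))   # [1, 2, 3, 4, 6, 9, 12, 18, 36]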
Another optimization comes from noting that the smallest divisor must be a prime, so you can use a stored table of primes. (There are fewer than 51 million primes below one billion.) The speedup will be less noticeable.
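A rough sketch of that prime-table variant (the table bound of 10**6 is an assumption of mine; it covers numbers up to about 10**12, since the smallest factor never exceeds the square root):

def primes_up_to(limit):
    # one-time simple sieve to build the stored table of primes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(range(i*i, limit + 1, i))
    return [i for i, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(10**6)          # assumed bound: enough for n up to ~10**12

def smallest_prime_factor(n):
    for p in PRIMES:
        if p * p > n:
            return n                  # no prime divisor <= sqrt(n): n is prime
        if n % p == 0:
            return p
    return n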
Let me offer a NumPy-based solution. It seems quite efficient:
import numpy as np
def factorize(number):
    n = np.arange(2, np.sqrt(number), dtype=int)
    n2 = number / n
    low = n[n2.astype(int) == n2]
    return np.concatenate((low, number // low))
factorize(34976237696437)
# array([71, 155399, 3170053, 492623066147, 225073763, 11033329])
# 176 msec
Related
I have solved Project Euler problem 12 and I tried to optimize my solution.
The part I am focusing on is the part of finding the number of divisors.
I thought the first algorithm I created would be slower than the second, but it wasn't, and I don't understand why.
First (a regular count that goes up to n**0.5):
from math import sqrt
def get(n):
    count = 0
    limit = sqrt(n)
    for i in range(1, int(limit)+1):
        if n%i==0:
            count+=2
    if limit.is_integer():
        return count-1
    return count
Second (prime factoring to get the exponent of each prime in order to use the divisor-count formula, i.e. the product of each exponent plus one; I am using the 6n±1 form of primes to make the search faster, but it is still slower):
def Get_Devisors_Amount(n):  # Prime factorization
    if n <= 1: return 1
    dcount = 1
    count = 0
    while n%2==0:
        count+=1
        n//=2
    dcount*=(count+1)
    count = 0
    while n%3==0:
        count+=1
        n//=3
    dcount*=(count+1)
    i = 1  # count for the form of primes 6n+-1
    while n!=1:
        t = 6*i+1
        count = 0
        while n%t==0:
            count+=1
            n//=t
        dcount*=(count+1)
        t = 6*i-1
        count = 0
        while n%t==0:
            count+=1
            n//=t
        if count!=0:
            dcount*=(count+1)
        i+=1
    if dcount==1: return 2  # n is a prime
    return dcount
How I tested the time
import time

start = time.time()
for i in range(1, 1000):
    get(i)
print(time.time()-start)

start = time.time()
for i in range(1, 1000):
    Get_Devisors_Amount(i)
print(time.time()-start)
Output:
get: 0.00299835205078125
Get_Devisors_Amount: 0.009994029998779297
Although I am using a property and a formula that I thought should reduce the search time, the first method is still faster. Could you explain why?
In the first approach, you are testing divisibility by each number from 1 to sqrt(x), so the cost of testing a single number x is sqrt(x). The sum of the first N square roots can be approximated by N*sqrt(N).
Time complexity of method 1: O(N*sqrt(N)) (N is the total count of numbers being tested).
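(A quick sanity check of that approximation, if you want to see it numerically:)

from math import sqrt

N = 1000
print(sum(sqrt(x) for x in range(1, N + 1)))  # grows roughly like N * sqrt(N)
print(N * sqrt(N))                            # same order of magnitude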
In the second approach, there are 2 cases:
If a number is prime, every candidate divisor of the form 6i±1 up to n is tested. Complexity: O(n/6) = O(n).
If a number isn't prime, it gets divided down quickly, so we can approximate the complexity as O(log(n)) (there might be a more accurate bound for this case; I'm making an approximation since it doesn't affect the argument).
For the prime numbers, using the fact that a prime n is tested against about n/6 candidates, the cost becomes 5/6 + 7/6 + 11/6 + 13/6 + 17/6 + ... + (last prime before N)/6, i.e. roughly (sum of all primes up to N)/6. The sum of all primes up to N can be approximated as N^2/(2*log N), so the cost of this step becomes N^2/(6*2*log N) = N^2/(12*log N).
Time complexity of method 2: O(N^2/(12*log N)) (N is the total count of numbers being tested).
(if you want, you can make more accurate bounds for the time complexities of each step. I have made a few approximations since it helps in proving the point without making any overoptimistic assumption).
Your first algorithm wisely only considers divisors up to sqrt(n).
But your second algorithm considers divisors all the way up to n, although admittedly if n has many factors, n will be reduced along the way.
If you fix this in your algorithm, by changing this:
t = 6*i-1
to this:
t = 6*i-1
if t*t > n:
    return dcount * 2
Then your second algorithm will be faster.
(The * 2 is because the algorithm would eventually find the remaining prime factor (n itself) and then dcount *= (count + 1) would double dcount before returning it.)
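For completeness, here is a sketch of the whole function with that early exit in place (I have also added an n == 1 guard that is not part of the suggestion above; without it the early exit can fire after the 6*i+1 branch has already reduced n to 1, e.g. for n = 49):

def Get_Devisors_Amount(n):
    if n <= 1: return 1
    dcount = 1
    count = 0
    while n % 2 == 0:
        count += 1
        n //= 2
    dcount *= (count + 1)
    count = 0
    while n % 3 == 0:
        count += 1
        n //= 3
    dcount *= (count + 1)
    i = 1
    while n != 1:
        t = 6*i + 1
        count = 0
        while n % t == 0:
            count += 1
            n //= t
        dcount *= (count + 1)
        if n == 1:
            break                  # fully factored by the 6*i+1 branch
        t = 6*i - 1
        if t*t > n:
            return dcount * 2      # whatever remains of n must be prime
        count = 0
        while n % t == 0:
            count += 1
            n //= t
        if count != 0:
            dcount *= (count + 1)
        i += 1
    if dcount == 1: return 2
    return dcount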
I wrote two primality tests in Python. The first one is based on trial division, the second one applies the sieve of Eratosthenes. My understanding is that the sieve should have a smaller time complexity than trial division, so the sieve should be asymptotically faster.
However, when I run it, trial division is much faster. For example, when n = 6*(10**11), is_prime(n) takes less than a second, but is_prime_sieve(n) practically never ends! Did I write the sieve wrong?
My code is:
import math

# determines if prime using trial division
def is_prime(n):
    d = {}
    u = math.floor(math.sqrt(n))
    i = 2
    # trial division: works pretty well for determining 600 billion
    while (i <= u):
        if (n % i == 0):
            return False
        i += 1
    return True

# primality test with sieve
def is_prime_sieve(n):
    # first find all prime numbers from 2 to u
    # then test them
    u = math.floor(math.sqrt(n))
    prime = {}
    lst = range(2, int(u)+1)
    for i in lst:
        j = 2
        prime[i] = True
        while (i*j <= u):
            prime[i*j] = False
            j += 1
    while (u >= 2):
        if (u not in prime) or (prime[u]):
            if (n % u == 0):
                return False
        u -= 1
    return True
For the Sieve of Eratosthenes you are recomputing the sieve every time. The sieve should be cached so that you only generate it once. It works well when you build the sieve once and then perform many primality checks; it is very inefficient if you only check a single number.
This means, by the way, that you need to anticipate the highest prime number and generate the sieve table up to that number.
When done right, is_prime_sieve becomes simply:
def is_prime_sieve(n):
    return prime[n]
You would not need the while loop.
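A minimal sketch of that cached approach (the LIMIT is an assumption and must cover every number you intend to test, which also means this only pays off when the numbers are small enough to sieve and you have many queries):

import math

LIMIT = 10**7                        # assumption: the largest number you will test
prime = [True] * (LIMIT + 1)         # prime[i] is True iff i is prime
prime[0] = prime[1] = False
for i in range(2, math.isqrt(LIMIT) + 1):
    if prime[i]:
        for j in range(i * i, LIMIT + 1, i):
            prime[j] = False         # cross off the multiples of i

def is_prime_sieve(n):
    return prime[n]                  # O(1) per query after the one-time setup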
The sieve finds all primes from 1 to n. Calculating one sieve is an awful lot faster than doing trial division for each of these numbers. Obviously if you determine all primes from 1 to n, and then throw away all the information for the first n-1 numbers, that's very inefficient.
It's like comparing the speed of a bus and a two seater sports car. The bus is much much faster if you need to take fifty people from A to B. If you take a single passenger, guess what, the sports car is faster.
But even with the traditional method of building the sieve, there are still far too many operations taking place.
I have developed a way of extracting prime numbers without division (except for data-management purposes) by adapting the basic sieve of Eratosthenes. I do not have to set any upper or lower bounds; the algorithm is completely open-ended. I build a data string from which I can go anywhere in the calculated range and pull up all the prime numbers in a subset of that range. I waste no calculations on division.
I know there's already a question similar to this, but I want to speed it up using GMPY2 (or something similar with GMP).
Here is my current code, it's decent but can it be better?
Edit: new code, checks divisors 2 and 3
def factors(n):
    result = set()
    result |= {mpz(1), mpz(n)}

    def all_multiples(result, n, factor):
        z = mpz(n)
        while gmpy2.f_mod(mpz(z), factor) == 0:
            z = gmpy2.divexact(z, factor)
            result |= {mpz(factor), z}
        return result

    result = all_multiples(result, n, 2)
    result = all_multiples(result, n, 3)

    for i in range(1, gmpy2.isqrt(n) + 1, 6):
        i1 = mpz(i) + 1
        i2 = mpz(i) + 5
        div1, mod1 = gmpy2.f_divmod(n, i1)
        div2, mod2 = gmpy2.f_divmod(n, i2)
        if mod1 == 0:
            result |= {i1, div1}
        if mod2 == 0:
            result |= {i2, div2}
    return result
If it's possible, I'm also interested in an implementation with divisors only within n^(1/3) and 2^(2/3)*n^(1/3).
As an example, Mathematica's factor() is much faster than the Python code. I want to factor numbers between 20 and 50 decimal digits. I know ggnfs can factor these in less than 5 seconds.
I am also interested in whether any module implementing fast factorization exists for Python.
I just made some quick changes to your code to eliminate redundant name lookups. The algorithm is still the same but it is about twice as fast on my computer.
import gmpy2
from gmpy2 import mpz
def factors(n):
    result = set()
    n = mpz(n)
    for i in range(1, gmpy2.isqrt(n) + 1):
        div, mod = divmod(n, i)
        if not mod:
            result |= {mpz(i), div}
    return result
print(factors(12345678901234567))
Other suggestions will need more information about the size of the numbers, etc. For example, if you need all the possible factors, it may be faster to construct those from all the prime factors. That approach will let you decrease the limit of the range statement as you proceed and also will let you increment by 2 (after removing all the factors of 2).
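For example, here is a small sketch of building every divisor from the prime factorization (divisors_from_primes and its dict argument are my own names, not part of gmpy2): every divisor is a product of each prime raised to some power between 0 and its exponent.

from itertools import product
from math import prod

def divisors_from_primes(prime_factors):
    # prime_factors maps prime -> exponent, e.g. {2: 2, 3: 1} for 12
    axes = [[p**k for k in range(e + 1)] for p, e in prime_factors.items()]
    return sorted(prod(combo) for combo in product(*axes))

print(divisors_from_primes({2: 2, 3: 1}))   # [1, 2, 3, 4, 6, 12]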
Update 1
I've made some additional changes to your code. I don't think your all_multiples() function is correct. Your range() statement isn't optimal since 2 is checked again, but my first fix made it worse.
The new code delays computing the co-factor until it knows the remainder is 0. I also tried to use the built-in functions as much as possible. For example, mpz % integer is faster than gmpy2.f_mod(mpz, integer) or gmpy2.f_mod(integer, mpz) where integer is a normal Python integer.
import gmpy2
from gmpy2 import mpz, isqrt
def factors(n):
    n = mpz(n)
    result = set()
    result |= {mpz(1), n}

    def all_multiples(result, n, factor):
        z = n
        f = mpz(factor)
        while z % f == 0:
            result |= {f, z // f}
            f += factor
        return result

    result = all_multiples(result, n, 2)
    result = all_multiples(result, n, 3)

    for i in range(1, isqrt(n) + 1, 6):
        i1 = i + 1
        i2 = i + 5
        if not n % i1:
            result |= {mpz(i1), n // i1}
        if not n % i2:
            result |= {mpz(i2), n // i2}
    return result
print(factors(12345678901234567))
I would change your program to just find all the prime factors less than the square root of n and then construct all the co-factors later. Then you decrease n each time you find a factor, check if n is prime, and only look for more factors if n isn't prime.
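A rough sketch of that approach (gmpy2.is_prime is used to stop as soon as the remaining cofactor is prime; the function name and structure are my own, and the full divisor set could then be built from these prime factors as in the earlier sketch):

import gmpy2
from gmpy2 import mpz, is_prime

def prime_factors(n):
    # divide out prime factors up to sqrt(n), shrinking n as you go,
    # and stop early once the remaining cofactor is prime
    n = mpz(n)
    factors = []
    f = mpz(2)
    while f * f <= n:
        if n % f == 0:
            factors.append(f)
            n //= f
            if is_prime(n):              # cofactor is prime: nothing left to search
                break
        else:
            f += 1 if f == 2 else 2      # after 2, test only odd candidates
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(12345678901234567))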
Update 2
The pyecm module should be able to factor the size numbers you are trying to factor. The following example completes in about a second.
>>> import pyecm
>>> list(pyecm.factors(12345678901234567890123456789012345678901, False, True, 10, 1))
[mpz(29), mpz(43), mpz(43), mpz(55202177), mpz(2928109491677), mpz(1424415039563189)]
There are various Python factoring modules available on the Internet. But if you want to implement factoring yourself (without using external libraries) then I can suggest the quite fast and very easy to implement Pollard-Rho algorithm. I implemented it fully in my code below; just scroll down to the code (at the bottom of the answer) if you don't want to read the explanation.
With high probability the Pollard-Rho algorithm finds the smallest non-trivial factor P (not equal to 1 or N) in O(Sqrt(P)) time. For comparison, the trial division algorithm that you implemented in your question takes O(P) time to find a factor P. This means, for example, that if a prime factor is P = 1 000 003 then trial division will find it after about 1 000 003 division operations, while Pollard-Rho on average will find it after just about 1 000 operations (Sqrt(1 000 003) ≈ 1 000), which is much, much faster.
To make the Pollard-Rho algorithm much faster we should be able to detect prime numbers, to exclude them from factoring and not waste time on them; for that, in my code I used the Fermat primality test, which is very fast and easy to implement in just 7-9 lines of code.
The Pollard-Rho algorithm itself is very short, 13-15 lines of code; you can see it at the very bottom of my pollard_rho_factor() function, and the remaining lines of code are supplementary helper functions.
I implemented all algorithms from scratch without using extra libraries (except the random module). That's why you can see my gcd() function there, although you can use Python's built-in math.gcd() instead (which finds the Greatest Common Divisor).
You can see the function Int() in my code; it is used just to convert Python's integers to GMPY2 integers. GMPY2 ints make the algorithm faster; you can just use Python's int(x) instead. I didn't use any GMPY2-specific functions, I just converted all ints to GMPY2 ints to get around a 50% speedup.
As an example I factor the first 190 digits of Pi! It takes 3-15 seconds to factor them. The Pollard-Rho algorithm is randomized, hence it takes a different amount of time to factor the same number on each run. You can restart the program and see that it prints a different running time.
Of course the factoring time depends greatly on the size of the prime divisors. Some 50-200 digit numbers can be factored within a fraction of a second, some will take months. My example with 190 digits of Pi has quite small prime factors, except the largest one; that's why it is fast. Other digits of Pi may not be that fast to factor. So the digit size of the number doesn't matter very much; only the size of the prime factors matters.
I intentionally implemented pollard_rho_factor() as one standalone function, without breaking it into smaller separate functions, although this goes against Python's style guide, which (as I remember) suggests not to have nested functions and to place all functions at global scope, and also suggests doing all imports at global scope in the first lines of the script. I made it a single function intentionally so that it is easily copy-pastable and fully ready to use in your code. The Fermat primality test sub-function is_fermat_probable_prime() is also copy-pastable and works without extra dependencies.
In very rare cases the Pollard-Rho algorithm may fail to find a non-trivial prime factor, especially for very small factors; for example, you can replace n inside test() with the small number 4 and see that Pollard-Rho fails. For such small failed factors you can easily fall back to the trial division algorithm that you implemented in your question.
Try it online!
def pollard_rho_factor(N, *, trials = 16):
    # https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
    import math, random

    def Int(x):
        import gmpy2
        return gmpy2.mpz(x) # int(x)

    def is_fermat_probable_prime(n, *, trials = 32):
        # https://en.wikipedia.org/wiki/Fermat_primality_test
        import random
        if n <= 16:
            return n in (2, 3, 5, 7, 11, 13)
        for i in range(trials):
            if pow(random.randint(2, n - 2), n - 1, n) != 1:
                return False
        return True

    def gcd(a, b):
        # https://en.wikipedia.org/wiki/Greatest_common_divisor
        # https://en.wikipedia.org/wiki/Euclidean_algorithm
        while b != 0:
            a, b = b, a % b
        return a

    def found(f, prime):
        print(f'Found {("composite", "prime")[prime]} factor, {math.log2(f):>7.03f} bits... {("Pollard-Rho failed to fully factor it!", "")[prime]}')
        return f

    N = Int(N)
    if N <= 1:
        return []
    if is_fermat_probable_prime(N):
        return [found(N, True)]
    for j in range(trials):
        i, stage, y, x = 0, 2, Int(1), Int(random.randint(1, N - 2))
        while True:
            r = gcd(N, abs(x - y))
            if r != 1:
                break
            if i == stage:
                y = x
                stage <<= 1
            x = (x * x + 1) % N
            i += 1
        if r != N:
            return sorted(pollard_rho_factor(r) + pollard_rho_factor(N // r))
    return [found(N, False)] # Pollard-Rho failed

def test():
    import time
    # http://www.math.com/tables/constants/pi.htm
    # pi = 3.
    # 1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679
    # 8214808651 3282306647 0938446095 5058223172 5359408128 4811174502 8410270193 8521105559 6446229489 5493038196
    # n = first 190 fractional digits of Pi
    n = 1415926535_8979323846_2643383279_5028841971_6939937510_5820974944_5923078164_0628620899_8628034825_3421170679_8214808651_3282306647_0938446095_5058223172_5359408128_4811174502_8410270193_8521105559_6446229489
    tb = time.time()
    print('N:', n)
    print('Factors:', pollard_rho_factor(n))
    print(f'Time: {time.time() - tb:.03f} sec')

test()
Output:
N: 1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489
Found prime factor, 1.585 bits...
Found prime factor, 6.150 bits...
Found prime factor, 20.020 bits...
Found prime factor, 27.193 bits...
Found prime factor, 28.311 bits...
Found prime factor, 545.087 bits...
Factors: [mpz(3), mpz(71), mpz(1063541), mpz(153422959), mpz(332958319), mpz(122356390229851897378935483485536580757336676443481705501726535578690975860555141829117483263572548187951860901335596150415443615382488933330968669408906073630300473)]
Time: 2.963 sec
I'm relatively new to the python world, and the coding world in general, so I'm not really sure how to go about optimizing my python script. The script that I have is as follows:
import math
z = 1
x = 0
while z != 0:
    x = x+1
    if x == 500:
        z = 0
    calculated = open('Prime_Numbers.txt', 'r')
    readlines = calculated.readlines()
    calculated.close()
    a = len(readlines)
    b = readlines[(a-1)]
    b = int(b) + 1
    for num in range(b, (b+1000)):
        prime = True
        calculated = open('Prime_Numbers.txt', 'r')
        for i in calculated:
            i = int(i)
            q = math.ceil(num/2)
            if (q%i==0):
                prime = False
        if prime:
            calculated.close()
            writeto = open('Prime_Numbers.txt', 'a')
            num = str(num)
            writeto.write("\n" + num)
            writeto.close()
            print(num)
As some of you can probably guess I'm calculating prime numbers. The external file that it calls on contains all the prime numbers between 2 and 20.
The reason that I've got the while loop in there is that I wanted to be able to control how long it ran for.
If you have any suggestions for cutting out any of the clutter in there, could you please respond and let me know? Thanks.
Reading and writing to files is very, very slow compared to operations with integers. Your algorithm can be sped up 100-fold by just ripping out all the file I/O:
import itertools

primes = {2}                      # A set containing only 2
for n in itertools.count(3):      # Start counting from 3, by 1
    for prime in primes:          # For every prime less than n
        if n % prime == 0:        # If it divides n
            break                 # Then n is composite
    else:
        primes.add(n)             # Otherwise, it is prime
        print(n)
A much faster prime-generating algorithm would be a sieve. Here's the Sieve of Eratosthenes, in Python 3:
end = int(input('Generate primes up to: '))
numbers = {n: True for n in range(2, end)} # Assume every number is prime, and then
for n, is_prime in numbers.items():       # (Python 3 only)
    if not is_prime:
        continue                          # For every prime number
    for i in range(n ** 2, end, n):       # Cross off its multiples
        numbers[i] = False
    print(n)
It is very inefficient to keep storing and loading all primes from a file. In general file access is very slow. Instead save the primes to a list or deque. For this initialize calculated = deque() and then simply add new primes with calculated.append(num). At the same time output your primes with print(num) and pipe the result to a file.
When you found out that num is not a prime, you do not have to keep checking all the other divisors. So break from the inner loop:
if q%i == 0:
    prime = False
    break
You do not need to go through all previous primes to check a new candidate. Since each non-prime factorizes into two integers, at least one of the factors has to be smaller than or equal to sqrt(num). So limit your search to those divisors.
Also the first part of your code irritates me.
z = 1
x = 0
while z != 0:
    x = x+1
    if x == 500:
        z = 0
This part seems to do the same as:
for x in range(500):
Also, you use x to limit how long the loop runs. Why not simply use a counter that you increment whenever a prime is found and check at the same time, breaking once the limit is reached? That would be more readable in my opinion.
In general you do not need to introduce a limit. You can simply abort the program at any point in time by hitting Ctrl+C.
However, as others have already pointed out, your chosen algorithm will perform very poorly for medium or large primes. There are more efficient algorithms to find prime numbers: https://en.wikipedia.org/wiki/Generating_primes, especially https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes.
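Putting the simpler suggestions above together (still plain trial division, not a sieve), a rough sketch; the seed primes match the file contents described in the question, and the count of 100 is an arbitrary choice:

from collections import deque
from math import isqrt

calculated = deque([2, 3, 5, 7, 11, 13, 17, 19])   # primes below 20, kept in memory

def is_prime(num):
    limit = isqrt(num)
    for p in calculated:
        if p > limit:
            return True            # no prime divisor <= sqrt(num): num is prime
        if num % p == 0:
            return False           # found a divisor: stop immediately
    return True

def find_primes(count):
    # count primes as they are found instead of looping a fixed number of times
    found = 0
    num = calculated[-1] + 1
    while found < count:
        if is_prime(num):
            calculated.append(num)
            print(num)
            found += 1
        num += 1

find_primes(100)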
You're writing a blank line to your file, which is making int() traceback. Also, I'm guessing you need to rstrip() off your newlines.
I'd suggest using two different files - one for initial values, and one for all values - initial and recently computed.
If you can keep your values in memory a while, that'd be a lot faster than going through a file repeatedly. But of course, this will limit the size of the primes you can compute, so for larger values you might return to the iterate-through-the-file method if you want.
For computing primes of modest size, a sieve is actually quite good, and worth a google.
When you get into larger primes, trial division by the first n primes is good, followed by m rounds of Miller-Rabin. If Miller-Rabin probabilistically indicates the number is probably a prime, then you do complete trial division or AKS or similar. Miller Rabin can say "This is probably a prime" or "this is definitely composite". AKS gives a definitive answer, but it's slower.
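For reference, a minimal sketch of that trial-division-plus-Miller-Rabin idea (the small-prime list and the number of rounds are arbitrary choices, and this is a probabilistic test, not a proof of primality):

import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin: returns False for definite composites, True for probable primes
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19):   # quick trial division by small primes
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                        # write n - 1 as d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                     # a is a witness that n is composite
    return True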
FWIW, I've got a bunch of prime-related code collected together at http://stromberg.dnsalias.org/~dstromberg/primes/
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ? # http://projecteuler.net/problem=3
I have a deal going with myself that if I can't solve a Project Euler problem I will understand the best solution I can find. I did write an algorithm which worked for smaller numbers, but it was too inefficient for bigger ones. So I googled Zach Denton's answer and started studying it.
Here is his code:
#!/usr/bin/env python
import math
def factorize(n):
    res = []
    # iterate over all even numbers first.
    while n % 2 == 0:
        res.append(2)
        n //= 2
    # try odd numbers up to sqrt(n)
    limit = math.sqrt(n+1)
    i = 3
    while i <= limit:
        if n % i == 0:
            res.append(i)
            n //= i
            limit = math.sqrt(n+i)
        else:
            i += 2
    if n != 1:
        res.append(n)
    return res
print max(factorize(600851475143))
Here are the bits I can't figure out for myself:
In the second while loop, why does he use a sqrt(n + 1) instead of just sqrt(n)?
Why wouldn't you also use sqrt(n + 1) when iterating over the even numbers in the first while loop?
How does the algorithm manage to find only prime factors? In the algorithm I first wrote I had a separate test for checking whether a factor was prime, but he doesn't bother.
I suspect the +1 has to do with the imprecision of float (I am not sure whether it's actually required, or is simply a defensive move on the author's part).
The first while loop factors all twos out of n. I don't see how sqrt(n + 1) would fit in there.
If you work from small factor to large factors, you automatically eliminate all composite candidates. Think about it: once you've factored out 5, you've automatically factored out 10, 15, 20 etc. No need to check whether they're prime or not: by that point n will not be divisible by them.
I suspect that checking for primality is what's killing your original algorithm's performance.
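As a concrete illustration of that last point (the factorization below is easy to verify by multiplying it back together):

# factorize(600851475143) tries 3, 5, 7, ... and only ever appends primes:
#   600851475143 = 71 * 839 * 1471 * 6857
# After 71 is divided out, the remaining n is no longer divisible by 142, 213,
# or any other multiple of 71, so no composite can ever reach res.
print(max(factorize(600851475143)))   # 6857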