I am trying to find the distribution of prime gaps for primes less than 100,000,000.
My method:
Step 1: Start with a TXT file "primes.txt" which has a list of primes (up to 10,000,000).
Step 2: Have the program read the file and then insert each number into a list, p1.
Step 3: Find the square root of the upper bound (10 times the upper bound of the primes in the TXT file, in this case, 100,000,000) and create another list, p2, with all the primes less than or equal to that square root, rounded up.
Step 4: Define an isPrime() method that checks whether the input is a prime number (N.B.: because I know that the numbers that will be checked are all less than 100,000,000, I only have to check whether the number is divisible by all the primes less than or equal to the square root of 100,000,000, which is 10,000)
Step 5: Add a list l which collects all the prime gaps, then iterate from 1 to 100,000,000, checking the primality of each number. If the number IS prime, then record the gap between it and the last prime number before it, and also write it to another document "primes2.txt".
Step 6: Output the list l.
The problem:
The program seems to take a LONG time to run. I have a feeling that the problem has to do with how I am managing the list due to its size (the Prime Number Theorem estimates about 620,420 elements in that list from "primes.txt"). Is there a way to reduce the runtime for this program by handling the list differently?
I've attached my code below.
import math
import sys

f = open("primes.txt", "r")
p1 = []
for i in f:
    p1.append(int(i))
f.close()

ml = 10000000
ms = math.sqrt(10*ml)
p2 = []
x1 = 0
while x1 < len(p1) and p1[x1] <= int(ms+0.5):
    p2.append(p1[x1])
    x1 += 1

def isPrime(n):
    for i in p2:
        if n%i == 0:
            if n/i == 1:
                return True
            return False
    return True

def main():
    l = [0]*1001 #1,2,4,6,8,10,12,...,2000 (1, and then all evens up to 2000)
    lastprime = -1
    diff = 0
    fileobject = open("primes2.txt", 'w')
    for i in xrange(2, 10*ml):  # start at 2; 1 is not prime
        if isPrime(i):
            if i > 2:
                diff = i - lastprime
                if diff == 1:
                    l[0] += 1
                else:
                    l[diff/2] += 1
            lastprime = i
            fileobject.write(str(i)+"\n")
        if i%(ml/100) == 0:
            print i/float(ml/10), "% complete"
    fileobject.close()
    print l

main()
EDIT: Made a change in how the program reads from the file
Tips:
For the first few X million, you can generate primes as fast as you can read them from a file, but you need to do it efficiently. See below. (Generating the numbers up to 100 million on my recent MacBook Pro takes about 7 seconds in Python; generating the primes up to 100,000,000 takes 4 minutes. It will be faster in PyPy and way faster in C, Swift, or Python with NumPy.)
You are careless with memory. Example: ps = f.read().split('\n') followed by p1 = [int(i) for i in ps], while ps sits in memory unused. Wasteful. Use a loop that reads the file line by line so memory is used more efficiently; that way the whole file does not sit idly in memory after conversion.
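A minimal sketch of that line-by-line read (assuming one prime per line in "primes.txt"):

import math

p1 = []
with open("primes.txt") as f:
    for line in f:              # only one line is held in memory at a time
        p1.append(int(line))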
There are very good reasons that big primes are useful for cryptography; they take a long time to generate. Python is not the most efficient language to tackle this with.
Here is a sieve of Eratosthenes to try:
def sieve_of_e(n):
    """ Returns a list of primes < n """
    sieve = [True] * n
    for i in xrange(3, int(n**0.5)+1, 2):
        if sieve[i]:
            sieve[i*i::2*i] = [False]*((n-i*i-1)/(2*i)+1)
    return [2] + [i for i in xrange(3, n, 2) if sieve[i]]
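As a sketch of how such a sieve could feed the original gap-distribution task directly (Python 3 here, and the function name is mine): tally the gaps in one pass over the sieve, with no per-number isPrime() calls at all.

def prime_gap_histogram(n):
    sieve = [True] * n
    sieve[0:2] = [False, False]              # 0 and 1 are not prime
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    gaps = {}                                # gap size -> count
    last = None
    for i in range(2, n):
        if sieve[i]:
            if last is not None:
                gaps[i - last] = gaps.get(i - last, 0) + 1
            last = i
    return gaps

print(prime_gap_histogram(100))   # {1: 1, 2: 8, 4: 7, 6: 7, 8: 1}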
Use the Sieve of Eratosthenes to upgrade the prime function. Here's a link:
Sieve of Eratosthenes - Finding Primes Python
Given this plain is_prime1 function, which checks all the divisors from 2 to sqrt(p), with some bit-playing to avoid even numbers, which are of course not primes.
import time

def is_prime1(p):
    if p & 1 == 0:
        return False
    # if the LSD is 5 then it is divisible by 5 (i.e. not a prime)
    elif p % 10 == 5:
        return False
    for k in range(2, int(p ** 0.5) + 1):
        if p % k == 0:
            return False
    return True
Versus this "optimized" version. The idea is to save all the primes found so far up to a certain number p, and then iterate over those stored primes (using the basic arithmetic fact that every composite number has a prime factor), so we don't iterate over all the numbers up to sqrt(p) but only over the primes we found, which should be few compared to sqrt(p). We also iterate over only part of the stored list, because the largest stored primes almost certainly won't "fit" as factors of p.
import time

global mem
global lenMem
mem = [2]
lenMem = 1

def is_prime2(p):
    global mem
    global lenMem
    # if p is even then the LSD is off
    if p & 1 == 0:
        return False
    # if the LSD is 5 then it is divisible by 5 (i.e. not a prime)
    elif p % 10 == 5:
        return False
    for div in mem[0: int(p ** 0.5) + 1]:
        if p % div == 0:
            return False
    mem.append(p)
    lenMem += 1
    return True
The only idea I have in mind is that "global variables are expensive and time consuming", but I don't know if there is another way, and if there is, whether it will really help.
On average, when running this same program:
start = time.perf_counter()
for p in range(2, 100000):
    print(f'{p} is a prime? {is_prime2(p)}') # change to is_prime1 or is_prime2
end = time.perf_counter()
I get that for is_prime1 the average time for checking the numbers 2-100K is ~0.99 seconds, and likewise for is_prime2 (maybe a difference of +0.01s on average; perhaps, as I said, the use of global variables costs some performance?).
The difference is a combination of three things:
You're just not doing that much less work. Your test case includes a ton of small numbers, where the difference between testing "all numbers from 2 to the square root" and "all primes from 2 to the square root" just isn't that large. Your "average case" is roughly the midpoint of the range, 50,000, whose square root is about 223.6: that means testing 48 primes, versus 222 numbers, if the number is prime. But most numbers aren't prime, and most have at least one small factor (proof left as an exercise), so you short-circuit and never test most of either set. If there's a factor below 8, which applies to ~77% of all numbers, limiting yourself to primes saves you maybe two tests.
You're slicing mem every time, and slicing is performed eagerly and completely, even if you don't use all the values (and as noted, you almost never do for the non-primes). This isn't a huge cost, but then, you weren't getting huge savings from skipping non-primes, so it likely eats what little savings you got from the other optimization.
(You found this one, good show) Your slice of primes took a number of primes equal to the square root of the number under test, not all primes less than that square root. So you actually performed the same number of tests, just against different divisors (many of them primes larger than the square root, which definitely don't need to be tested).
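A hedged sketch of the fix described in point 3 (the name is_prime2_fixed is mine; like the original, it assumes calls arrive in increasing order so mem stays sorted):

import bisect

mem = [2]

def is_prime2_fixed(p):
    if p < 2:
        return False
    limit = int(p ** 0.5)
    cut = bisect.bisect_right(mem, limit)   # index of first stored prime > sqrt(p)
    for div in mem[:cut]:                   # slice by value, not by count
        if p % div == 0:
            return False
    if p > mem[-1]:
        mem.append(p)                       # appending in order keeps the list sorted
    return True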
A side-note:
Your up-front tests aren't actually saving you much work; you redo both tests in the loop, so they're wasted effort when the number is prime (you test both twice). And your test for divisibility by five is pointless: % 10 is no faster than % 5 (computers don't operate in base 10 anyway), and if not p % 5: is a slightly faster, more direct, and more complete way to test for divisibility (your test doesn't recognize multiples of 10, only multiples of 5 that aren't multiples of 10).
The tests are also wrong, because they don't exclude the base cases: they say 2 and 5 are not prime, because they're divisible by 2 and 5 respectively.
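Putting those notes together, a corrected version of the up-front tests might look like this (a sketch, and the name is mine):

def is_prime1_fixed(p):
    if p < 2:
        return False
    if p % 2 == 0:
        return p == 2                            # handle the base case 2
    if p % 5 == 0:
        return p == 5                            # handle the base case 5
    for k in range(3, int(p ** 0.5) + 1, 2):     # odd divisors only, so 2 is never retested
        if p % k == 0:
            return False
    return True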
First of all, you should remove the print call; it is very time-consuming.
You should time just your function, not the print, so you could do it like this:
start = time.perf_counter()
for p in range(2, 100000):
    ## print(f'{p} is a prime? {is_prime2(p)}') # change to is_prime1 or is_prime2
    is_prime1(p)
end = time.perf_counter()
print("prime1", end - start)

start = time.perf_counter()
for p in range(2, 100000):
    ## print(f'{p} is a prime? {is_prime2(p)}') # change to is_prime1 or is_prime2
    is_prime2(p)
end = time.perf_counter()
print("prime2", end - start)
is_prime1 is still faster for me.
If you want to hold primes in global memory to accelerate multiple calls, you need to ensure that the primes list is properly populated even when the function is called with numbers in random order. The way is_prime2() stores and uses the primes assumes that, for example, it is called with 7 before being called with 343. If not, 343 will be treated as a prime because 7 is not yet in the primes list.
So the function must compute and store all primes up to √343 before it can respond to the is_prime(343) call.
In order to quickly build a primes list, the Sieve of Eratosthenes is one of the fastest methods. But, since you don't know in advance how many primes you need, you can't allocate the sieve's bit flags up front. What you can do is use a rolling window, moving the sieve forward in chunks (of, let's say, 1,000,000 bits at a time). When a number beyond your current maximum prime is requested, you just generate more primes chunk by chunk until you have enough to respond.
Also, since you're going to build a list of primes, you might as well make it a set and answer the call by checking whether the requested number is in it. This will require generating more primes than divisor testing alone would need but, in the spirit of accelerating subsequent calls, that should not be an issue.
Here's an example of an isPrime() function that uses that approach:
primes = {3}
sieveMax = 3
sieveChunk = 1000000 # must be an even number

def isPrime(n):
    if not n & 1: return n == 2
    global primes, sieveMax, sieveChunk
    while n > sieveMax:                 # extend the sieve window chunk by chunk
        base, sieveMax = sieveMax, sieveMax + sieveChunk
        sieve = [True] * sieveChunk
        for p in primes:                # cross off multiples of already-known primes
            i = (p - base % p) % p
            sieve[i::p] = [False] * len(sieve[i::p])
        for i in range(0, sieveChunk, 2):   # base is odd, so even offsets are odd numbers
            if not sieve[i]: continue
            p = i + base
            primes.add(p)
            sieve[i::p] = [False] * len(sieve[i::p])
    return n in primes
On the first call for an unknown prime it will be slower than the divisions approach, but as the prime list builds up, it will provide much better response times.
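A brief usage sketch: the first out-of-range call pays for sieving a whole chunk, after which nearby queries are plain set lookups.

print(isPrime(343))      # False: 7 crosses it off while sieving the first chunk
print(isPrime(999983))   # True, answered from the same sieved chunk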
I've come back to programming after a long hiatus, so please forgive any stupid errors/inefficient code.
I am creating an encryption program that uses the RSA method of encryption, which involves finding numbers coprime to other numbers to generate a key. I am using the Euclidean algorithm to compute highest common factors, adding a number to the list of coprimes when its HCF == 1. I generate two lists of coprimes for different numbers, then compare them to find the coprimes common to both sets. The basic code is below:
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def coprimes(n):
    cp = []
    for i in range(1, n):
        if gcd(i, n) == 1:
            cp.append(i)
    return cp   # return the list; printing here would leave compare() with None

def compare(n, m):
    a = coprimes(n)
    b = coprimes(m)
    c = []
    for i in a:
        if i in b:
            c.append(i)
    print(c)
This code works perfectly for small numbers and gives me what I want, but execution takes forever and is finally Killed when computing with extremely large numbers in the billions range, which is necessary for even a moderate level of security.
I assume this is a memory issue, but I can't work out how to do it in a non-memory-intensive way. I tried multiprocessing, but that just made my computer unusable due to the number of processes running.
How can I calculate the coprimes of large numbers and then compare two sets of coprimes in an efficient and workable way?
If the only thing you're worried about is running out of memory, you could use generators here.
def coprimes(n):
    for i in range(1, n):
        if gcd(i, n) == 1:
            yield i
This way you can use each coprime value and then discard it once you don't need it. However, nothing is going to change the fact that your code is O(N^2) and will always perform slowly for large primes. And this assumes Euclid's algorithm is constant time, which it is not.
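A brief usage sketch of consuming the generator without ever storing the list (the count below is Euler's totient of 1000):

count = sum(1 for _ in coprimes(1000))
print(count)   # 400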
You could change the strategy and approach this from the perspective of common prime factors. The common coprimes between n and m will be all numbers that are not divisible by any of their common prime factors.
def primeFactors(N):
    p = 2
    while p*p <= N:
        count = 0
        while N % p == 0:
            count += 1
            N //= p
        if count: yield p
        p += 1 + (p & 1)   # after 2, advance through odd candidates only
    if N > 1: yield N

import math

def compare2(n, m):
    # skip list for multiples of common prime factors
    skip = { p: p for p in primeFactors(math.gcd(n, m)) }
    for x in range(1, min(m, n)):
        if x in skip:
            p = skip[x]          # x is a multiple of common prime p
            nxt = x + p          # determine the next multiple to skip
            while nxt in skip:   # for that prime
                nxt += p
            skip[nxt] = p
        else:
            yield x              # common coprime of n and m
The performance is considerably better than matching lists of coprimes, especially on larger numbers:
from timeit import timeit
timeit(lambda:list(compare2(10**5,2*10**5)),number=1)
# 0.025 second
timeit(lambda:list(compare2(10**6,2*10**6)),number=1)
# 0.196 second
timeit(lambda:list(compare2(10**7,2*10**7)),number=1)
# 2.18 seconds
timeit(lambda:list(compare2(10**8,2*10**8)),number=1)
# 20.3 seconds
timeit(lambda:list(compare2(10**9,2*10**9)),number=1)
# 504 seconds
At some point, building lists of all the coprimes becomes a bottleneck and you should just use/process them as they come out of the generator (for example to count how many there are):
timeit(lambda:sum(1 for _ in compare2(10**9,2*10**9)),number=1)
# 341 seconds
Another way to approach this, which is somewhat slower than the prime factor approach but much simpler to code, would be to list coprimes of the gcd between n and m:
import math

def compare3(n, m):
    d = math.gcd(n, m)
    for c in range(1, min(n, m)):
        if math.gcd(c, d) == 1:
            yield c
timeit(lambda:list(compare3(10**6,2*10**6)),number=1)
# 0.28 second
timeit(lambda:list(compare3(10**7,2*10**7)),number=1)
# 2.84 seconds
timeit(lambda:list(compare3(10**8,2*10**8)),number=1)
# 30.8 seconds
Given that it uses almost no memory, it could be advantageous in some cases:
timeit(lambda:sum(1 for _ in compare3(10**9,2*10**9)),number=1)
# 326 seconds
When I run this code for smaller numbers, it returns the list just fine, but for larger numbers like 29996299 (which I would call small in the context of what I'm working on), it runs for a long time; I've waited 45 minutes with no results and had to kill the program. What I was wondering is whether there is a more efficient way to handle numbers whose scale is larger than 4 or 5 digits. I've tested a few permutations of the range function to see if there was a better way to handle the limits of the list I want to produce, but nothing seems to have any effect on how long the computation takes. I'm new to Python and am not that experienced as a programmer. Thank you for your time.
I ran the program again before submitting this post and it took an hour and a half or so.
The function of the program is to take the user-selected number, use it to generate a lower bound, find all primes between that bound and the input and append them to a list, then generate a second upper bound, find all the primes up to it, and append them as well, creating a list that extends forwards and backwards from the initial number.
The program works like I expect it to, but not as quickly as I need it to, since the numbers I'm going to be dealing with will get large quickly, almost doubling at each phase.
initial_num = input("Please enter a number. ")
lower_1 = int(initial_num) - 1000
upper_1 = int(initial_num)
list_1 = []
for num in range(lower_1, upper_1):
    if num > 1:
        for i in range(2, num):
            if (num % i) == 0:
                break
        else:
            list_1.append(num)
lower_2 = list_1[-1]
upper_2 = list_1[-1] + 2000
list_2 = []
for num in range(lower_2, upper_2 + 1):
    if num > 1:
        for i in range(2, num):
            if (num % i) == 0:
                break
        else:
            list_2.append(num)
list_3 = list_1 + list_2[1:]
print list_3
You can use a more efficient algorithm to generate the entire list of prime numbers up to N: the Sieve of Eratosthenes. Please have a look at the linked article; it even includes example pseudocode. The basic idea of the algorithm is:
maintain L, a list of potentially prime numbers (initially all numbers from 2 to N)
pick the next prime number (p) as the first element of L (intially 2)
remove all numbers that are a multiple of p, up to N, since they cannot be prime
repeat from step 2
At the end you are left with a list of prime numbers.
An implementation in Python from here:
def eratosthenes2(n):
    multiples = set()
    for i in range(2, n+1):
        if i not in multiples:
            yield i
            multiples.update(range(i*i, n+1, i))

print(list(eratosthenes2(100)))
To reduce memory consumption you could consider using a bitset, storing one bit for each number. That should reduce memory usage by a factor of 32 to 64. A bitset implementation is available for Python here.
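As a rough sketch of the bitset idea (the names here are mine), a bytearray can pack one flag per bit, about an eighth of the memory of a byte-per-number sieve and far less than a list of Python bools:

def sieve_bits(n):
    bits = bytearray((n >> 3) + 1)      # bit i set means i is composite
    primes = []
    for i in range(2, n):
        if not bits[i >> 3] & (1 << (i & 7)):
            primes.append(i)
            for j in range(i * i, n, i):        # mark multiples of i
                bits[j >> 3] |= 1 << (j & 7)
    return primes

print(sieve_bits(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]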
I'm relatively new to the python world, and the coding world in general, so I'm not really sure how to go about optimizing my python script. The script that I have is as follows:
import math

z = 1
x = 0
while z != 0:
    x = x + 1
    if x == 500:
        z = 0
    calculated = open('Prime_Numbers.txt', 'r')
    readlines = calculated.readlines()
    calculated.close()
    a = len(readlines)
    b = readlines[(a-1)]
    b = int(b) + 1
    for num in range(b, (b+1000)):
        prime = True
        calculated = open('Prime_Numbers.txt', 'r')
        for i in calculated:
            i = int(i)
            q = math.ceil(num/2)
            if (q % i == 0):
                prime = False
        if prime:
            calculated.close()
            writeto = open('Prime_Numbers.txt', 'a')
            num = str(num)
            writeto.write("\n" + num)
            writeto.close()
            print(num)
As some of you can probably guess I'm calculating prime numbers. The external file that it calls on contains all the prime numbers between 2 and 20.
The reason that I've got the while loop in there is that I wanted to be able to control how long it ran for.
If you have any suggestions for cutting out any clutter in there could you please respond and let me know, thanks.
Reading and writing to files is very, very slow compared to operations with integers. Your algorithm can be sped up 100-fold by just ripping out all the file I/O:
import itertools

primes = {2}                     # A set containing only 2
for n in itertools.count(3):     # Start counting from 3, by 1
    for prime in primes:         # For every prime less than n
        if n % prime == 0:       # If it divides n
            break                # Then n is composite
    else:
        primes.add(n)            # Otherwise, it is prime
        print(n)
A much faster prime-generating algorithm would be a sieve. Here's the Sieve of Eratosthenes, in Python 3:
end = int(input('Generate primes up to: '))
numbers = {n: True for n in range(2, end)}  # Assume every number is prime, and then
for n, is_prime in numbers.items():         # (Python 3 only)
    if not is_prime:
        continue                            # For every prime number
    for i in range(n ** 2, end, n):         # Cross off its multiples
        numbers[i] = False
    print(n)
It is very inefficient to keep storing and loading all primes from a file; in general, file access is very slow. Instead, save the primes to a list or deque (from collections import deque): initialize calculated = deque() and then simply add new primes with calculated.append(num). At the same time, output each prime with print(num) and pipe the result to a file.
When you found out that num is not a prime, you do not have to keep checking all the other divisors. So break from the inner loop:
if q % i == 0:
    prime = False
    break
You do not need to go through all previous primes to check a new candidate. Since each composite number factors into two integers, at least one of the factors has to be smaller than or equal to sqrt(num). So limit your search to these divisors, as sketched below.
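A small sketch combining both points, assuming calculated is a sorted collection of the primes found so far (math.isqrt needs Python 3.8+; use int(math.sqrt(num)) on older versions):

import math

def is_prime_td(num, calculated):
    limit = math.isqrt(num)      # no composite lacks a factor <= sqrt(num)
    for p in calculated:
        if p > limit:
            return True          # only divisors up to sqrt(num) matter
        if num % p == 0:
            return False         # found a divisor: stop immediately
    return True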
Also the first part of your code irritates me.
z = 1
x = 0
while z != 0:
    x = x + 1
    if x == 500:
        z = 0
This part seems to do the same as:
for x in range(500):
Also, you use x to cap the loop at 500 iterations; why not simply use a counter of primes found instead, increasing it when a prime is found and breaking when the limit is reached? That would be more readable, in my opinion; see the sketch below.
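A sketch of that counter idea (is_prime here stands in for whatever primality test you use):

found = 0
num = 2
while found < 500:           # stop after 500 primes, not 500 loop iterations
    if is_prime(num):
        print(num)
        found += 1
    num += 1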
In general you do not need to introduce a limit. You can simply abort the program at any point in time by hitting Ctrl+C.
However, as others already pointed out, your chosen algorithm will perform very poorly for medium or large primes. There are more efficient algorithms to find prime numbers: https://en.wikipedia.org/wiki/Generating_primes, especially https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes.
You're writing a blank line to your file, which makes int() traceback. Also, I'm guessing you need to rstrip() off your newlines.
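For example, a defensive read that skips blank lines and strips newlines before converting (a sketch against the file name used in the question):

primes = []
with open('Prime_Numbers.txt') as f:
    for line in f:
        line = line.rstrip()
        if line:                        # skip blank lines that would crash int()
            primes.append(int(line))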
I'd suggest using two different files - one for initial values, and one for all values - initial and recently computed.
If you can keep your values in memory a while, that'd be a lot faster than going through a file repeatedly. But of course, this will limit the size of the primes you can compute, so for larger values you might return to the iterate-through-the-file method if you want.
For computing primes of modest size, a sieve is actually quite good, and worth a google.
When you get into larger primes, trial division by the first n primes is good, followed by m rounds of Miller-Rabin. If Miller-Rabin probabilistically indicates the number is probably prime, then you do complete trial division or AKS or similar. Miller-Rabin can say "this is probably a prime" or "this is definitely composite"; AKS gives a definitive answer, but it's slower.
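A hedged sketch of that pipeline's Miller-Rabin stage (random bases; a False answer is always definite, True means "probably prime"):

import random

def miller_rabin(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # cheap trial division first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                   # write n-1 as d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # witness found: definitely composite
    return True                         # probably prime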
FWIW, I've got a bunch of prime-related code collected together at http://stromberg.dnsalias.org/~dstromberg/primes/
This is one of the problems on Project Euler:
If we calculate a^2 mod 6 for 0 <= a <= 5 we get: 0, 1, 4, 3, 4, 1.
The largest value of "a" such that a^2 mod 6 = a is 4.
Let's call M(n) the largest value of a < n such that a^2 mod n = a.
So M(6) = 4.
Find the sum of M(n) for 1 <= n <= 10^7.
So far, this is what I have:
import time
start = time.time()
from math import sqrt

squares = []
for numba in xrange(0, 10000001/2+2):
    squares.append(numba*numba)

def primes1(n):
    """ Returns a list of primes < n """
    sieve = [True] * (n/2)
    for i in xrange(3, int(sqrt(n))+1, 2):
        if sieve[i/2]:
            sieve[i*i/2::i] = [False] * ((n-i*i-1)/(2*i)+1)
    return [2] + [2*i+1 for i in xrange(1, n/2) if sieve[i]]

tot = 0
gor = primes1(10000001)

def factor1(n):
    '''Returns True if n has at most one distinct prime factor'''
    boo = False
    '''if n in gor:
    return True'''
    for e in xrange(0, len(gor)):
        z = gor[e]
        if n%z == 0:
            if boo:
                return False
            boo = True
        elif z*2 > n:
            break
    return True

for n in xrange(2, 10000001):
    if factor1(n):
        tot += 1
    else:
        for a in xrange(int(sqrt(n))+1, n/2+1):
            if squares[a]%n == a:
                tot += n+1-a
                break

print tot
print time.time()-start
I've tried this code for smaller cases and it works perfectly; however, it is way too slow to do 10^7 cases.
Currently, for n being less than 20000, it runs in about 8 seconds.
When n is less than 90000, it runs in about 150 seconds.
As far as I can tell, for n less than 10^7, it will run for many hours if not days.
I'm already using the sieve to generate prime numbers, so that part is as fast as it can be; is there anything I can do to speed up the rest of the code?
I've already tried using different compilers and runtimes like Psyco, PyPy, and Shedskin. Psyco provides a minimal increase, Shedskin speeds it up about 7 times but produces errors when large numbers occur, and PyPy speeds it up the most (about 20-30x). But even then, it's still not fast enough for the number of cases it has to go through.
Edit:
I added
squares = []
for numba in xrange(0, 10000001/2+2):
    squares.append(numba*numba)
This pre-generates all the squares of a beforehand so that I don't have to keep generating the same ones over and over again. The program became slightly faster, but still not fast enough.
This might depend on the size of N because of memory usage, but in smaller tests I found something of an improvement by precalculating the factor counts. So something like this:
factors = [0]*N
for z in gor:
    for n in xrange(1, N):
        m = z*n
        if m >= N: break
        factors[m] += 1
where N is 10000001, or whatever limit you're using.
Then instead of if factor1(n) you do if factors[n] < 2.
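A small sanity check of the idea (Python 3 range here; in the snippet above, factors[m] ends up counting m's distinct prime factors, so factors[n] < 2 holds exactly for primes and prime powers, matching factor1):

N = 20
gor = [2, 3, 5, 7, 11, 13, 17, 19]    # primes below N
factors = [0] * N
for z in gor:
    for n in range(1, N):
        m = z * n
        if m >= N:
            break
        factors[m] += 1
print([n for n in range(2, N) if factors[n] < 2])
# [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19]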