I wrote two primality tests in Python. The first is based on trial division; the second applies the Sieve of Eratosthenes. My understanding is that the sieve should have a smaller time complexity than trial division, so the sieve version should be asymptotically faster.
However, when I run them, trial division is much faster. For example, when n = 6*(10**11), is_prime(n) takes less than a second, but is_prime_sieve(n) practically never ends! Did I write the sieve wrong?
My code is:
import math

# determines if prime using trial division
def is_prime(n):
    d = {}
    u = math.floor(math.sqrt(n))
    i = 2
    # trial division: works pretty well for determining 600 billion
    while (i <= u):
        if (n % i == 0):
            return False
        i += 1
    return True
# primality test with sieve
def is_prime_sieve(n):
    # first find all prime numbers from 2 to u
    # then test them
    u = math.floor(math.sqrt(n))
    prime = {}
    lst = range(2, int(u)+1)
    for i in lst:
        j = 2
        prime[i] = True
        while (i*j <= u):
            prime[i*j] = False
            j += 1
    while (u >= 2):
        if (u not in prime) or (prime[u]):
            if (n % u == 0):
                return False
        u -= 1
    return True
For the Sieve of Eratosthenes you are recomputing the sieve on every call. The sieve should be cached so that you only generate it once. It works well when you build the sieve once and then perform many primality checks; it is very inefficient if you only check a single number.
This means, by the way, that you need to anticipate the highest number you will test and generate the sieve table up to that number.
When done right, is_prime_sieve becomes simply:
def is_prime_sieve(n):
    return prime[n]
You would not need the while loop.
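For example, a minimal sketch of that cached approach (MAX_N is a made-up name for the largest value you ever expect to test; the sieve is built once, up front):

import math

MAX_N = 10**7  # hypothetical upper bound on every n you will ever test

prime = [True] * (MAX_N + 1)
prime[0] = prime[1] = False
for i in range(2, math.isqrt(MAX_N) + 1):
    if prime[i]:
        # cross off the multiples of i, starting at i*i
        prime[i*i::i] = [False] * len(prime[i*i::i])

def is_prime_sieve(n):
    # the one-time cost is above; every call is now a constant-time lookup
    return prime[n]

Note that for a single n as large as the 6*(10**11) from the question, such a table would not even fit in memory, so this approach only makes sense when the numbers to test are bounded and numerous.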
The sieve finds all primes from 1 to n. Calculating one sieve is an awful lot faster than doing trial division for each of these numbers. Obviously if you determine all primes from 1 to n, and then throw away all the information for the first n-1 numbers, that's very inefficient.
It's like comparing the speed of a bus and a two seater sports car. The bus is much much faster if you need to take fifty people from A to B. If you take a single passenger, guess what, the sports car is faster.
But even with the traditional method of building the sieve, there are still far too many operations being performed.
I have developed a way of extracting prime numbers without division (except for data-management purposes) by adapting the basic Sieve of Eratosthenes. I do not have to set any upper or lower bounds; the algorithm is completely open-ended. I build a data string from which I can jump anywhere in the calculated range and pull up all the prime numbers in a subset of that range. I waste no calculations on division.
Given this plain is_prime1 function, which checks all the divisors from 2 to sqrt(p), with some bit-playing in order to skip even numbers, which are of course not prime:
import time

def is_prime1(p):
    if p & 1 == 0:
        return False
    # if the LSD is 5 then it is divisible by 5 (i.e. not a prime)
    elif p % 10 == 5:
        return False
    for k in range(2, int(p ** 0.5) + 1):
        if p % k == 0:
            return False
    return True
Versus this "optimized" version. The idea is to save all the primes we have found up to a certain number p, and then iterate over those primes (using the basic arithmetic fact that every number is a product of primes), so that we don't iterate through all the numbers up to sqrt(p), but only over the primes we have found, which should be a small fraction of the numbers up to sqrt(p). We also iterate over only part of the stored primes, because the largest ones almost certainly won't "fit" into the number p.
import time

global mem
global lenMem
mem = [2]
lenMem = 1

def is_prime2(p):
    global mem
    global lenMem
    # if p is even then the LSB is off (i.e. p is divisible by 2)
    if p & 1 == 0:
        return False
    # if the LSD is 5 then it is divisible by 5 (i.e. not a prime)
    elif p % 10 == 5:
        return False
    for div in mem[0: int(p ** 0.5) + 1]:
        if p % div == 0:
            return False
    mem.append(p)
    lenMem += 1
    return True
The only idea I have in mind is that "global variables are expensive and time consuming" but I don't know if there is another way, and if there is, will it really help?
On average, when running this same program:
start = time.perf_counter()
for p in range(2, 100000):
    print(f'{p} is a prime? {is_prime2(p)}') # change to is_prime1 or is_prime2
end = time.perf_counter()
I get that for is_prime1 the average time for checking the numbers 1-100K is ~0.99 seconds, and about the same for is_prime2 (maybe a difference of +0.01s on average; perhaps, as I said, the usage of global variables ruins some of the performance?).
The difference is a combination of three things:
You're just not doing that much less work. Your test case includes testing a ton of small numbers, where the distinction between testing "all numbers from 2 to the square root" and "all primes from 2 to the square root" just isn't that big. Your "average case" is roughly the midpoint of the range, 50,000, whose square root is about 223.6; that means testing 48 primes, or 222 numbers, when the number is prime. But most numbers aren't prime, and most numbers have at least one small factor (proof left as an exercise), so you short-circuit and don't actually test most of the candidates in either set (if there's a factor below 8, which applies to ~77% of all numbers, you've saved maybe two tests by limiting yourself to primes).
You're slicing mem every time, and slicing is performed eagerly and completely, even if you don't end up using all the values (and, as noted, you almost never do for the non-primes). This isn't a huge cost, but then you weren't getting huge savings from skipping non-primes either, so it likely eats what little savings the other optimization bought you.
(You found this one yourself, good show.) Your slice of primes takes a number of primes equal to the square root of the number being tested, not all primes less than that square root. So you actually performed about the same number of tests, just against different numbers (many of them primes larger than the square root that definitely don't need to be tested). A corrected sketch follows the side-note below.
A side-note:
Your up-front tests aren't actually saving you much work; you redo both of them in the loop, so they're wasted effort when the number is prime (you test them both twice). And your test for divisibility by five is pointless: % 10 is no faster than % 5 (computers don't operate in base 10 anyway), and if not p % 5: is a slightly faster, more direct, and more complete way to test for divisibility by five (your version doesn't recognize multiples of 10, only multiples of 5 that aren't multiples of 10).
The tests are also wrong, because they don't exclude the base case (they say 2 and 5 are not prime, because they're divisible by 2 and 5 respectively).
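Putting points 2 and 3 and the side-note together, a corrected memoized version might look something like this (just a sketch; is_prime2_fixed is a made-up name, and like the original it assumes the numbers are tested in increasing order, so that mem already contains every smaller prime):

import bisect

mem = [2, 3, 5]  # seed with the first few primes

def is_prime2_fixed(p):
    if p in (2, 3, 5):                        # handle the base cases explicitly
        return True
    if p < 2 or p % 2 == 0 or p % 5 == 0:     # direct % 5 test instead of % 10 == 5
        return False
    root = int(p ** 0.5)
    # select the stored primes by value (<= sqrt(p)), not by position,
    # and index into mem instead of building a throwaway slice
    hi = bisect.bisect_right(mem, root)
    for k in range(hi):
        if p % mem[k] == 0:
            return False
    mem.append(p)
    return True

Whether this actually beats is_prime1 over range(2, 100000) is a separate question; as point 1 says, there just isn't much work to save at that size.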
First of all, you should remove the print call; it is very time-consuming.
You should time just your function, not the print, so you could do it like this:
start = time.perf_counter()
for p in range(2, 100000):
    ## print(f'{p} is a prime? {is_prime2(p)}') # change to is_prime1 or is_prime2
    is_prime1(p)
end = time.perf_counter()
print("prime1", end-start)

start = time.perf_counter()
for p in range(2, 100000):
    ## print(f'{p} is a prime? {is_prime2(p)}') # change to is_prime1 or is_prime2
    is_prime2(p)
end = time.perf_counter()
print("prime2", end-start)
is_prime1 is still faster for me.
If you want to hold primes in global memory to accelerate multiple calls, you need to ensure that the primes list is properly populated even when the function is called with numbers in random order. The way is_prime2() stores and uses the primes assumes that, for example, it is called with 7 before being called with 343. If not, 343 will be treated as a prime because 7 is not yet in the primes list.
So the function must compute and store all primes up to √343 before it can correctly answer the is_prime(343) call.
In order to quickly build a list of primes, the Sieve of Eratosthenes is one of the fastest methods. But since you don't know in advance how many primes you need, you can't allocate the sieve's bit flags in advance. What you can do is use a rolling window of the sieve and move forward in chunks (of, let's say, 1000000 bits at a time). When a number beyond your current maximum is requested, you just generate more primes, chunk by chunk, until you have enough to respond.
Also, since you're going to build a list of primes, you might as well make it a set and answer the call by checking whether the requested number is in it. This requires generating more primes than a divisibility test would need, but, in the spirit of accelerating subsequent calls, that should not be an issue.
Here's an example of an isPrime() function that uses that approach:
primes = {3}
sieveMax = 3
sieveChunk = 1000000 # must be an even number

def isPrime(n):
    if not n&1: return n==2
    global primes, sieveMax, sieveChunk
    while n > sieveMax:
        base, sieveMax = sieveMax, sieveMax + sieveChunk
        sieve = [True] * sieveChunk
        for p in primes:
            i = (p - base%p) % p
            sieve[i::p] = [False]*len(sieve[i::p])
        for i in range(0, sieveChunk, 2):
            if not sieve[i]: continue
            p = i + base
            primes.add(p)
            sieve[i::p] = [False]*len(sieve[i::p])
    return n in primes
On the first call to an unknown prime, it will perform slower than the divisions approach but as the prime list builds up, it will provide much better response time.
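For example (a hypothetical usage sketch; the first call pays for building a sieve chunk, the later ones are plain set lookups):

print(isPrime(104729))   # the 10000th prime: the first call builds a chunk -> True
print(isPrime(999983))   # largest prime below one million, already covered -> True
print(isPrime(999999))   # 999999 = 3*3*3*7*11*13*37 -> False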
I have solved Project Euler problem 12 and I tried to optimize my solution.
The part I am focusing on is the part of finding the number of divisors.
The first algorithm I created I thought was going to be slower than the second, but it wasn't, and I don't understand why.
First (regular count that checks divisors up to n**0.5):
from math import sqrt

def get(n):
    count = 0
    limit = sqrt(n)
    for i in range(1, int(limit)+1):
        if n%i==0:
            count+=2
    if limit.is_integer():
        return count-1
    return count
Second (prime factoring to get the degree of each prime in order to use the divisor-counting formula; I am using the 6n±1 form of primes to calculate faster, but it is still slower):
def Get_Devisors_Amount(n): # Prime factorization
    if n <= 1: return 1
    dcount = 1
    count = 0
    while n%2==0:
        count+=1
        n//=2
    dcount*=(count+1)
    count = 0
    while n%3==0:
        count+=1
        n//=3
    dcount*=(count+1)
    i = 1 # counter for the form of primes 6n+-1
    while n!=1:
        t = 6*i+1
        count = 0
        while n%t==0:
            count+=1
            n//=t
        dcount*=(count+1)
        t = 6*i-1
        count = 0
        while n%t==0:
            count+=1
            n//=t
        if count!=0:
            dcount*=(count+1)
        i+=1
    if dcount==1: return 2 # n is a prime
    return dcount
How I tested the time
import time

start = time.time()
for i in range(1, 1000):
    get(i)
print(time.time()-start)

start = time.time()
for i in range(1, 1000):
    Get_Devisors_Amount(i)
print(time.time()-start)
Output:
get: 0.00299835205078125
Get_Devisors_Amount: 0.009994029998779297
Although I am using a property and a formula that I think should make the search time lower, the first method is still faster. Could you explain why?
In the first approach, you are testing divisibility by each number from 1 to sqrt(x), so the complexity of testing a single number is sqrt(x). The sum of the first N square roots can be approximated by N*sqrt(N).
Time complexity of method 1: O(N*sqrt(N)) (N is the total count of numbers being tested).
In the second approach, there are 2 cases:
If a number is prime, all candidates up to n are tested. Complexity: O(n/6) = O(n).
If a number isn't prime, it is reduced quickly by its small factors, so we can approximate the complexity as O(log(n)) (there might be a more accurate bound for this case; I'm making an approximation since it doesn't matter for the argument).
For the prime numbers, using the fact that a prime n is tested against about n/6 candidates, the total work becomes 5/6 + 7/6 + 11/6 + 13/6 + 17/6 + ... + (last prime before N)/6, which is (sum of all prime numbers up to N)/6. The sum of all prime numbers up to N can be approximated as N^2/(2*log N), so the complexity of this step becomes N^2/(6*2*log N) = N^2/(12*log N).
Time complexity of method 2: O(N^2/(12*log N)) (N is the total count of numbers being tested).
(If you want, you can derive tighter bounds for each step; I have made a few approximations since they are enough to prove the point without making any overoptimistic assumptions.)
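As a quick sanity check of those two growth estimates (this only evaluates the two expressions above; it does not time the actual functions):

from math import sqrt, log

for N in (10**5, 10**6, 10**7):
    method1 = N * sqrt(N)             # ~ total divisions for the sqrt(x) loop
    method2 = N**2 / (12 * log(N))    # ~ total divisions for the 6k+-1 factorization
    print(N, method1, method2, method2 / method1)

The ratio in the last column keeps growing with N, i.e. the second method falls further and further behind as the tested range gets larger.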
Your first algorithm wisely only considers divisors up to sqrt(n).
But your second algorithm considers divisors all the way up to n, although admittedly if n has many factors, n will be reduced along the way.
If you fix this in your algorithm, by changing this:
t = 6*i-1
to this:
t = 6*i-1
if t*t > n:
    return dcount * 2
Then your second algorithm will be faster.
(The * 2 is because the algorithm would eventually find the remaining prime factor (n itself) and then dcount *= (count + 1) would double dcount before returning it.)
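Putting it together, a sketch of the second function with that early exit (the check is done at the top of the loop, before any candidate for that i is tried; Get_Devisors_Amount_v2 is just a made-up name):

def Get_Devisors_Amount_v2(n):
    if n <= 1: return 1
    dcount = 1
    for p in (2, 3):                   # strip out the factors of 2 and 3 first
        count = 0
        while n % p == 0:
            count += 1
            n //= p
        dcount *= (count + 1)
    i = 1
    while n != 1:
        t = 6*i - 1                    # smallest candidate not tried yet
        if t*t > n:                    # whatever remains must be a single prime
            return dcount * 2
        for t in (6*i + 1, 6*i - 1):   # keep the original 6i+1 / 6i-1 order
            count = 0
            while n % t == 0:
                count += 1
                n //= t
            dcount *= (count + 1)
        i += 1
    return dcount

Now the loop stops near sqrt(n) instead of running all the way up to n when the remaining factor is prime.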
I'm relatively new to the Python world, and to the coding world in general, so I'm not really sure how to go about optimizing my Python script. The script that I have is as follows:
import math

z = 1
x = 0
while z != 0:
    x = x+1
    if x == 500:
        z = 0
    calculated = open('Prime_Numbers.txt', 'r')
    readlines = calculated.readlines()
    calculated.close()
    a = len(readlines)
    b = readlines[(a-1)]
    b = int(b) + 1
    for num in range(b, (b+1000)):
        prime = True
        calculated = open('Prime_Numbers.txt', 'r')
        for i in calculated:
            i = int(i)
            q = math.ceil(num/2)
            if (q%i==0):
                prime = False
        if prime:
            calculated.close()
            writeto = open('Prime_Numbers.txt', 'a')
            num = str(num)
            writeto.write("\n" + num)
            writeto.close()
            print(num)
As some of you can probably guess I'm calculating prime numbers. The external file that it calls on contains all the prime numbers between 2 and 20.
The reason that I've got the while loop in there is that I wanted to be able to control how long it ran for.
If you have any suggestions for cutting out any of the clutter in there, could you please respond and let me know? Thanks.
Reading and writing to files is very, very slow compared to operations with integers. Your algorithm can be sped up 100-fold by just ripping out all the file I/O:
import itertools

primes = {2}                     # A set containing only 2
for n in itertools.count(3):     # Start counting from 3, by 1
    for prime in primes:         # For every prime less than n
        if n % prime == 0:       # If it divides n
            break                # Then n is composite
    else:
        primes.add(n)            # Otherwise, it is prime
        print(n)
A much faster prime-generating algorithm would be a sieve. Here's the Sieve of Eratosthenes, in Python 3:
end = int(input('Generate primes up to: '))
numbers = {n: True for n in range(2, end)}   # Assume every number is prime, and then
for n, is_prime in numbers.items():          # (Python 3 only)
    if not is_prime:
        continue                             # For every prime number
    for i in range(n ** 2, end, n):          # Cross off its multiples
        numbers[i] = False
    print(n)
It is very inefficient to keep storing and loading all primes from a file. In general file access is very slow. Instead save the primes to a list or deque. For this initialize calculated = deque() and then simply add new primes with calculated.append(num). At the same time output your primes with print(num) and pipe the result to a file.
When you found out that num is not a prime, you do not have to keep checking all the other divisors. So break from the inner loop:
if q%i == 0:
    prime = False
    break
You do not need to go through all previous primes to check a new candidate. Since each non-prime factors into two integers, at least one of the factors has to be smaller than or equal to sqrt(num). So limit your search to these divisors.
Also the first part of your code irritates me.
z = 1
x = 0
while z != 0:
    x = x+1
    if x == 500:
        z = 0
This part seems to do the same as:
for x in range(500):
Also, you use x to limit the run to 500 passes; why don't you simply count the primes you find instead, and break when that counter reaches your limit? That would be more readable in my opinion.
In general you do not need to introduce a limit. You can simply abort the program at any point in time by hitting Ctrl+C.
However, as others already pointed out, your chosen algorithm will perform very poor for medium or large primes. There are more efficient algorithms to find prime numbers: https://en.wikipedia.org/wiki/Generating_primes, especially https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes.
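A rough sketch combining those suggestions (keeping the original names where possible; found is made up, the deque is seeded with the primes up to 20 that were in the file, and the limit of 500 now counts primes found rather than loop passes):

import math
from collections import deque

calculated = deque([2, 3, 5, 7, 11, 13, 17, 19])   # the primes between 2 and 20
found = 0
num = calculated[-1] + 1

while found < 500:
    prime = True
    limit = math.isqrt(num)            # no divisor above sqrt(num) needs checking
    for i in calculated:
        if i > limit:
            break
        if num % i == 0:
            prime = False              # found a divisor, so num is composite
            break
    if prime:
        calculated.append(num)
        print(num)                     # pipe stdout to a file if you want it saved
        found += 1
    num += 1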
You're writing a blank line to your file, which is making int() traceback. Also, I'm guessing you need to rstrip() off your newlines.
I'd suggest using two different files - one for initial values, and one for all values - initial and recently computed.
If you can keep your values in memory a while, that'd be a lot faster than going through a file repeatedly. But of course, this will limit the size of the primes you can compute, so for larger values you might return to the iterate-through-the-file method if you want.
For computing primes of modest size, a sieve is actually quite good, and worth a google.
When you get into larger primes, trial division by the first n primes is good, followed by m rounds of Miller-Rabin. If Miller-Rabin probabilistically indicates that the number is probably prime, then you do complete trial division, or AKS, or similar. Miller-Rabin can say "this is probably a prime" or "this is definitely composite". AKS gives a definitive answer, but it's slower.
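A generic sketch of that pipeline, i.e. trial division by a handful of small primes followed by rounds of Miller-Rabin (is_probable_prime is a made-up name; this is the textbook form of the test, not code from the link below):

import random

def is_probable_prime(n, rounds=20):
    # trial division by the first few primes
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n == p:
            return True
        if n % p == 0:
            return False
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    # `rounds` rounds of Miller-Rabin with random bases
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False   # this base is a witness: n is definitely composite
    return True            # n is probably prime; confirm deterministically if needed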
FWIW, I've got a bunch of prime-related code collected together at http://stromberg.dnsalias.org/~dstromberg/primes/
I am writing a code to find the largest prime factor of a very large number.
Problem 3 of Project Euler :
What is the largest prime factor of the number 600851475143 ?
I coded it in C... but the data type long long int is not sufficient to hold the value.
Now, I have rewritten the code in Python. How can I reduce the time taken for execution (as it is taking a considerable amount of time)?
def isprime(b):
    x=2
    while x<=b/2:
        if(b%x)==0:
            return 0
        x+=1
    return 1

def lpf(a):
    x=2
    i=2
    while i<=a/2:
        if a%i==0:
            if isprime(i)==1:
                if i>x:
                    x=i
                    print(x)
        i+=1
    print("final answer " + str(x))

z=600851475143
lpf(z)
There are many possible algorithmic speed ups. Some basic ones might be:
First, if you are only interested in the largest prime factor, you should check for factors starting from the largest possible ones, not the smallest. So instead of looping from 2 up to a/2, try checking from a down to 2.
You could load a database of primes instead of using the isprime function (there are dozens of such files on the net).
Also, only odd numbers can be primes (except for 2), so you can step by 2 in each iteration.
Your isprime checker could also be sped up: you do not have to look for divisors up to b/2, it is enough to check up to sqrt(b), which reduces the complexity from O(n) to O(sqrt(n)) (assuming the modulo operation is constant time). A sketch applying the last two points follows this list.
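For instance, points 3 and 4 applied to the original isprime (a sketch keeping the question's 0/1 return convention):

def isprime(b):
    if b < 2:
        return 0
    if b % 2 == 0:
        return 1 if b == 2 else 0   # 2 is the only even prime
    x = 3
    while x*x <= b:                 # same as x <= sqrt(b), without floats
        if b % x == 0:
            return 0
        x += 2                      # only odd candidates
    return 1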
You could use the __int128 type provided by GCC: http://gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html . This way, you can continue to use C and avoid having to optimize Python's speed. In addition, you can always add your own custom storage type to hold numbers bigger than long long in C.
I think you're checking too many numbers (incrementing by 1 and starting at 2 in each case). If you want to check primality by trial division, you need to divide by fewer numbers: only odd numbers to start with (better yet, only primes). You can range over the odd numbers in Python the following way:
for x in range(3, some_limit, 2):
    if some_number % x == 0:
        etc.
In addition, once you have a list of primes, you should be able to run through that list backwards (because the question asks for highest prime factor) and test if any of those primes evenly divides into the number.
Lastly, people usually go up to the square-root of a number when checking trial division because anything past the square-root is not going to provide new information. Consider 100:
1 x 100
2 x 50
5 x 20
10 x 10
20 x 5
etc.
You can find all the important divisor information by just checking up to the square root of the number. This tip is useful both for testing primes and for testing where to start looking for a potential divisor for that huge number.
First off, your two while loops only need to go up to the sqrt(n) since you will have hit anything past that earlier (you then need to check a/i for primeness as well). In addition, if you find the lowest number that divides it, and the result of the division is prime, then you have found the largest.
First, correct your isprime function:
from math import sqrt

def isprime(b):
    x=2
    sqrtb = sqrt(b)
    while x<=sqrtb:
        if(b%x)==0:
            return 0
        x+=1
    return 1
Then, your lpf:
def lpf(a):
    x=2
    i=2
    sqrta = sqrt(a)
    while i<=sqrta:
        if a%i==0:
            b = a//i # integer division
            if isprime(b):
                return b
            if isprime(i):
                x=i
                print(x)
        i+=1
    return x
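For instance, with the two corrected functions above (and from math import sqrt in scope), the Project Euler number works out as:

z = 600851475143
answer = lpf(z)                # prints 71, 839, 1471, 6857 as it finds larger prime factors
print("final answer", answer)  # 6857, since 600851475143 = 71*839*1471*6857

and it finishes quickly, because the outer loop now stops at sqrt(a), about 775146, instead of a/2.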
Is this in any way an optimal solution for finding primes? I am not trying to add every optimization under the sun, but is the principle good?
def primesUpto(self, x):
    primes = [2]
    sieve = [2]
    i = 3
    while i <= x:
        composite = False
        j = 0
        while j < len(sieve):
            sieve[j] = sieve[j] - 1
            if sieve[j] == 0:
                composite = True
                sieve[j] = primes[j]
            j += 1
        if not composite:
            primes.append(i)
            sieve.append(i*i-i)
        i += 1
    return primes
Hmm, very interesting. Your code is an actual, honest-to-goodness, genuine sieve of Eratosthenes IMHO, counting its way along the ascending natural numbers by decrementing each counter that it sets up for each prime encountered, by 1 on each step.
And it is very inefficient. Tested on Ideone, it runs at the same empirical order of growth, ~ n^2.2 (at the tested range of a few thousand primes produced), as the famously inefficient Turner's trial division sieve (in Haskell).
Why? Several reasons. First, there is no early bailout in your test: when you detect a composite, you continue processing the array of counters, sieve. You have to, because of the second reason: you count the difference by decrementing each counter by 1 on each step, with 0 representing your current position. This is the most faithful expression of the original sieve IMHO, and it is very inefficient: today our CPUs know how to add numbers in O(1) time (if those numbers belong to a certain range, 0 .. 2^32 or 0 .. 2^64, of course).
Moreover, our computers also have direct access memory now, and having calculated the far-off number we can mark it in a random access array. Which is the foundation of the efficiency of the sieve of Eratosthenes on modern computers - both the direct calculation, and the direct marking of multiples.
And third, perhaps the most immediate reason for inefficiency, is the premature handling of the multiples: when you encounter 5 as a prime, you add its first multiple (not yet encountered) i.e. 25, right away into the array of counters, sieve (i.e. the distance between the current point and the multiple, i*i-i). That is much too soon. The addition of 25 must be postponed until ... well, until we encounter 25 among the ascending natural numbers. Starting to handle the multiples of each prime prematurely (at p instead of p*p) leads to having way too many counters to maintain - O(n) of them (where n is the number of primes produced), instead of just O(π(sqrt(n log n))) = O(sqrt(n / log n)).
The postponement optimization, when applied to a similar "counting" sieve in Haskell, brought its empirical orders of growth from ~ n^2.3 .. 2.6 for n = 1000 .. 6000 primes down to just above ~ n^1.5 (with obviously enormous gains in speed). When counting was further replaced by direct addition, the resulting measured empirical orders of growth were ~ n^1.2 .. 1.3 in producing up to half a million primes (although in all probability it would gain on ~ n^1.5 for bigger ranges).
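For reference, here is a rough Python sketch of the postponement idea described above (the well-known "postponed sieve" formulation, not the Haskell code that was measured): a prime p only gets a working entry for its next multiple once the candidates reach p*p, so only the primes up to the square root of the current candidate are active, and the next multiple is found by direct addition rather than by counting down.

from itertools import count

def postponed_sieve():
    yield 2; yield 3; yield 5; yield 7
    D = {}                      # maps an upcoming composite to a step (2*p) of one of its primes
    ps = postponed_sieve()      # separate, much shorter stream of base primes
    next(ps)                    # skip 2; only odd candidates are walked below
    p = next(ps)                # 3
    q = p*p                     # 9: handling of p is postponed until p*p
    for c in count(9, 2):       # odd candidates 9, 11, 13, ...
        if c in D:              # c is a known upcoming composite
            s = D.pop(c)
        elif c < q:             # below the next postponed square, so c is prime
            yield c
            continue
        else:                   # c == q == p*p: start marking p's multiples
            s = 2*p
            p = next(ps)
            q = p*p
        x = c + s               # slide the marker to the next free odd multiple
        while x in D:
            x += s
        D[x] = s

For example, list(itertools.islice(postponed_sieve(), 15)) should give [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47].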