I'm new to both Python and StackOverflow so I apologise if this question has been repeated too much or if it's not a good question. I'm doing a beginner's Python course and one of the tasks I have to do is to make a function that finds the next prime number after a given input. This is what I have so far:
def nextPrime(n):
    num = n + 1
    for i in range(1, 500):
        for j in range(2, num):
            if num % j == 0:
                num = num + 1
    return num
When I run it on the site's IDE, everything works fine, but when I submit the task it says the runtime was too long and that I should optimise my code. I'm not really sure how to do this, so would it be possible to get some feedback or suggestions on how to make it run faster?
When your function finds the answer, it keeps checking the same number hundreds of times. This is why it is taking so long. Also, when you increase num, you should break out of the inner loop so that the new number is checked against the small factors first (which are the most likely to eliminate it and would accelerate progress).
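Applied directly to your original structure, that fix might look roughly like this (a sketch only; the while True / for-else shape is my own choice):

def nextPrime(n):
    num = n + 1
    while True:
        for j in range(2, num):
            if num % j == 0:
                num = num + 1
                break              # restart checking num against the small divisors
        else:
            return num             # no divisor found, so num is prime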
To make this simpler and more efficient, you should break your problem down into separate concerns. Checking whether a number is prime should be implemented in its own function. This will make the code of your nextPrime() function much simpler:
def nextPrime(n):
    n += 1
    while not isPrime(n): n += 1
    return n
Now you only need to implement an efficient isPrime() function:
def isPrime(x):
    p, inc = 2, 1
    while p*p <= x:            # only test divisors up to the square root of x
        if x % p == 0:
            return False
        p, inc = p+inc, 2      # p takes the values 2, 3, 5, 7, 9, ... (odd steps after 2)
    return x > 1               # 0 and 1 are not prime
Looping from 1 to 500, especially with another loop nested inside it, is not only inefficient but also confines the range of the possible "next prime number" that you're trying to find. Therefore, you should make use of a while loop and break, which lets you exit the loop as soon as you have found the prime number (of course, if the prompt states that the number is less than 501, your approach makes sense).
Furthermore, you can make use of the fact that you only need to check the integers less than or equal to the square root of the candidate (in Python, num**0.5) to determine whether it is prime: divisors always come in pairs, and the smaller divisor of each pair is at most the square root.
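For example, a square-root-based check along those lines might look like this (a minimal sketch; the is_prime name and exact bound are my own choices):

def is_prime(num):
    if num < 2:
        return False
    # divisors come in pairs, so any divisor pair has a member <= num ** 0.5
    for j in range(2, int(num ** 0.5) + 1):
        if num % j == 0:
            return False
    return True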
After some trial and error I have found a solution which works very quickly for the Project Euler Problem 5. (I have found another way which correctly solved the example case (numbers 1-10) but took an eternity to solve the actual Problem.) Here it goes:
def test(n):
    for x in range(2,21):
        if n % x != 0:
            return False
    return True

def thwart(n):
    for x in range(2,21):
        if test(n/x):
            n /= x
            return n
    raise TypeError

num = 1
for x in range(1,21):
    num *= x

while True:
    try:
        num = thwart(num)
    except TypeError:
        break

print(num)
My main problem is understanding why calling thwart(num) repeatedly is enough to result in the correct solution. (I.e. why is it able to find the SMALLEST number and doesn't just spit out any number divisible by the numbers 1-20?)
I only had some vague thoughts when programming it and was surprised at how quickly it worked. But now I have trouble figuring out why exactly it even works... The optimized solutions of other people on SO I've found so far all talk about prime factors, and I can't see how that fits with my program...?
Any help is appreciated! Thanks!
Well, this isn't really a coding issue but a mathematical one. If you write each of the numbers from 1-20 as the primes that make it up, you get the following:
1, 2, 3, 2^2, 5, 2*3, 7, 2^3, 3^2, 2*5, ..., 2^2*5.
The interesting part here is that once you multiply together the highest power of every prime factor appearing in these numbers, you get a number that can be divided by each of the numbers between one and twenty.
Once you realize that the problem is a simple mathematical one and approach it as such, you can use this basic code:
import math

primes = [2]
for n in range(3, 21):              # get primes between 1 and 20
    for i in primes:
        if n in primes:
            break
        if n % i == 0:
            break
        if i > math.sqrt(n):
            primes.append(n)
            break

s = 1
for i in primes:
    for j in range(10):             # no reason for 10, could as well be 5 because 2^5 > 20
        if i**j > 20:
            s = s*(i**(j-1))
            break

print(s)
Additionally, the hint in the problem statement that 2520 is the smallest number that can be divided by all of the numbers from 1 to 10 should make it clear how such a number is built: when you take the biggest exponent of each prime and multiply them together, you get 2520.
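For the numbers 1 to 10, the calculation works out like this (a quick snippet you can run to check):

# highest prime powers occurring among 2..10: 2^3 (from 8), 3^2 (from 9), 5, 7
print(2**3 * 3**2 * 5 * 7)   # 8 * 9 * 5 * 7 = 2520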
What your solution does
Your solution basically takes the number 1*2*3*...*20 and tries dividing it by every number between 2 and 20 in such a way that the result is still divisible by all of them. By running it over and over you remove the unneeded factors. Early on it removes all the unnecessary 2s: it divides by 2, returns the number, gets called again, and divides by 2 again. Once the extra 2s are gone it eliminates the extra 3s; after that it tries dividing by 4, sees that this no longer works, continues to 5, 6, 7..., and when it finishes the loop without being able to divide at all, it raises a TypeError and your program finishes with the correct number. This is not an efficient way to solve this problem, but it works for small numbers.
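To see it concretely, here is the same idea traced on the smaller 1-10 case (the range(2, 11) adaptation below is mine, for illustration, not your exact code):

def test(n):
    return all(n % x == 0 for x in range(2, 11))

def thwart(n):
    for x in range(2, 11):
        if test(n // x):
            return n // x
    raise TypeError

num = 3628800                  # 10! = 1*2*...*10
while True:
    try:
        num = thwart(num)
        print(num)             # 1814400, 907200, ..., finally 2520
    except TypeError:
        break
# the loop stops at 2520, the smallest number divisible by all of 1..10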
When I run this code for smaller numbers it returns the list just fine, but for larger numbers (which I would call small in the context of what I'm working on), like 29996299, it runs for a very long time; I waited 45 minutes with no results and had to end up killing the program. What I was wondering was whether there is a more efficient way to handle numbers larger than 4 or 5 digits. I've tested a few permutations of the range function to see if there was a better way to handle the limits of the list I want to produce, but nothing seems to have any effect on how long the computation takes. I'm new to Python and am not that experienced as a programmer. Thank you for your time.
I ran the program again before submitting this post and it took about an hour and a half.
The function of the program is to take the user-selected number, use it to generate a lower bound, find all primes between that bound and the input and append them to a list, then generate a second upper bound, find all primes up to it, and append them as well, creating a list that extends backwards and forwards from the initial number.
The program works like I expect it to, but not as quickly as I need, since the numbers I'm going to be dealing with will get large quickly, almost doubling at each phase.
initial_num = input("Please enter a number. ")

lower_1 = int(initial_num) - 1000
upper_1 = int(initial_num)

list_1 = []
for num in range(lower_1, upper_1):
    if num > 1:
        for i in range(2, num):
            if (num % i) == 0:
                break
        else:
            list_1.append(num)

lower_2 = list_1[-1]
upper_2 = list_1[-1] + 2000

list_2 = []
for num in range(lower_2, upper_2 + 1):
    if num > 1:
        for i in range(2, num):
            if (num % i) == 0:
                break
        else:
            list_2.append(num)

list_3 = list_1 + list_2[1:]
print(list_3)
You can use a more efficient algorithm to generate the entire list of prime numbers up to N: the Sieve of Eratosthenes. Please have a look at the linked article; it even includes example pseudocode. The basic idea of the algorithm is:
maintain L, a list of potentially prime numbers (initially all numbers from 2 to N)
pick the next prime number p as the first element of L (initially 2)
remove all numbers that are a multiple of p, up to N, since they cannot be prime
repeat from step 2
At the end you are left with a list of prime numbers.
An implementation in Python, from here:
def eratosthenes2(n):
    multiples = set()
    for i in range(2, n+1):
        if i not in multiples:
            yield i
            multiples.update(range(i*i, n+1, i))

print(list(eratosthenes2(100)))
To reduce memory consumption you could consider using a bitset, storing one bit for each number. That should reduce memory usage by a factor of roughly 32 to 64. A bitset implementation is available for Python here.
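As a rough sketch of the memory-saving idea (this uses a plain bytearray, one byte per number rather than one bit, so it is not the linked bitset package, but it is already far more compact than a set of Python integers):

def sieve_bytearray(n):
    is_prime = bytearray([1]) * (n + 1)        # one byte per candidate number
    is_prime[0:2] = b'\x00\x00'                # 0 and 1 are not prime
    for i in range(2, int(n**0.5) + 1):
        if is_prime[i]:
            # cross off all multiples of i, starting at i*i
            is_prime[i*i::i] = bytearray(len(range(i*i, n + 1, i)))
    return [i for i in range(2, n + 1) if is_prime[i]]

print(sieve_bytearray(100))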
I'm relatively new to the python world, and the coding world in general, so I'm not really sure how to go about optimizing my python script. The script that I have is as follows:
import math

z = 1
x = 0

while z != 0:
    x = x+1
    if x == 500:
        z = 0
    calculated = open('Prime_Numbers.txt', 'r')
    readlines = calculated.readlines()
    calculated.close()
    a = len(readlines)
    b = readlines[(a-1)]
    b = int(b) + 1
    for num in range(b, (b+1000)):
        prime = True
        calculated = open('Prime_Numbers.txt', 'r')
        for i in calculated:
            i = int(i)
            q = math.ceil(num/2)
            if (q%i==0):
                prime = False
        if prime:
            calculated.close()
            writeto = open('Prime_Numbers.txt', 'a')
            num = str(num)
            writeto.write("\n" + num)
            writeto.close()
            print(num)
As some of you can probably guess I'm calculating prime numbers. The external file that it calls on contains all the prime numbers between 2 and 20.
The reason that I've got the while loop in there is that I wanted to be able to control how long it ran for.
If you have any suggestions for cutting out any of the clutter in there, could you please respond and let me know? Thanks.
Reading and writing to files is very, very slow compared to operations with integers. Your algorithm can be sped up 100-fold by just ripping out all the file I/O:
import itertools

primes = {2}                       # A set containing only 2

for n in itertools.count(3):       # Start counting from 3, by 1
    for prime in primes:           # For every prime less than n
        if n % prime == 0:         # If it divides n
            break                  # Then n is composite
    else:
        primes.add(n)              # Otherwise, it is prime
        print(n)
A much faster prime-generating algorithm would be a sieve. Here's the Sieve of Eratosthenes, in Python 3:
end = int(input('Generate primes up to: '))
numbers = {n: True for n in range(2, end)}  # Assume every number is prime, and then
for n, is_prime in numbers.items():         # (Python 3 only)
    if not is_prime:
        continue                            # For every prime number
    for i in range(n ** 2, end, n):         # Cross off its multiples
        numbers[i] = False
    print(n)
It is very inefficient to keep storing and loading all primes from a file; in general, file access is very slow. Instead, save the primes to a list or deque: initialize calculated = deque() and then simply add new primes with calculated.append(num). At the same time, output your primes with print(num) and pipe the result to a file.
When you find out that num is not a prime, you do not have to keep checking all the other divisors. So break from the inner loop:
if q % i == 0:
    prime = False
    break
You do not need to go through all previous primes to check for a new prime. Since every composite number factors into two integers, at least one of the factors has to be smaller than or equal to sqrt(num). So limit your search to those divisors.
Also the first part of your code irritates me.
z = 1
x = 0
while z != 0:
    x = x+1
    if x == 500:
        z = 0
This part seems to do the same as:
for x in range(500):
Also, you use x to limit the loop to 500 iterations. Why not simply use a counter that you increase whenever a prime is found and check against your limit at the same time, breaking once it is reached? That would be more readable, in my opinion.
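Putting these suggestions together, the rewritten main loop might look roughly like this (a sketch only: the 500-prime limit is kept from your code, the structure and names are my own, and writing the results to a file is left to output redirection):

import math
from collections import deque

calculated = deque([2])            # keep the primes in memory instead of a file
num = 3
while len(calculated) < 500:       # stop once 500 primes have been found
    limit = math.isqrt(num)        # only test divisors up to sqrt(num)
    prime = True
    for p in calculated:
        if p > limit:
            break                  # no divisor found up to sqrt(num): num is prime
        if num % p == 0:
            prime = False
            break                  # found a divisor: stop checking
    if prime:
        calculated.append(num)
        print(num)
    num += 2                       # even numbers greater than 2 are never prime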
In general you do not need to introduce a limit. You can simply abort the program at any point in time by hitting Ctrl+C.
However, as others already pointed out, your chosen algorithm will perform very poorly for medium or large primes. There are more efficient algorithms to find prime numbers: https://en.wikipedia.org/wiki/Generating_primes, especially https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes.
You're writing a blank line to your file, which makes int() throw a traceback. Also, I'm guessing you need to rstrip() the newlines.
I'd suggest using two different files - one for initial values, and one for all values - initial and recently computed.
If you can keep your values in memory a while, that'd be a lot faster than going through a file repeatedly. But of course, this will limit the size of the primes you can compute, so for larger values you might return to the iterate-through-the-file method if you want.
For computing primes of modest size, a sieve is actually quite good, and worth a google.
When you get into larger primes, trial division by the first n primes is good, followed by m rounds of Miller-Rabin. If Miller-Rabin probabilistically indicates the number is probably prime, then you do complete trial division, AKS, or similar. Miller-Rabin can say "this is probably a prime" or "this is definitely composite"; AKS gives a definitive answer, but it's slower.
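For reference, a sketch of that trial-division-plus-Miller-Rabin combination might look like this (the function name, the choice of small primes, and the default of 20 rounds are my own illustrative choices):

import random

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    # quick trial division by a few small primes first
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    # Miller-Rabin: each round either proves n composite or says "probably prime"
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False               # a witnesses that n is composite
    return True                        # probably prime

print(is_probable_prime(2**31 - 1))    # True: 2147483647 is a (Mersenne) prime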
FWIW, I've got a bunch of prime-related code collected together at http://stromberg.dnsalias.org/~dstromberg/primes/
The prime factors of 13195 are 5, 7, 13 and 29.
What is the largest prime factor of the number 600851475143 ? # http://projecteuler.net/problem=3
I have a deal going with myself that if I can't solve a project Euler problem I will understand the best solution I can find. I did write an algorithm which worked for smaller numbers but was too inefficient to work for bigger ones. So I googled Zach Denton's answer and started studying it.
Here is his code:
#!/usr/bin/env python
import math

def factorize(n):
    res = []
    # iterate over all even numbers first.
    while n % 2 == 0:
        res.append(2)
        n //= 2
    # try odd numbers up to sqrt(n)
    limit = math.sqrt(n+1)
    i = 3
    while i <= limit:
        if n % i == 0:
            res.append(i)
            n //= i
            limit = math.sqrt(n+i)
        else:
            i += 2
    if n != 1:
        res.append(n)
    return res

print max(factorize(600851475143))
Here are the bits I can't figure out for myself:
In the second while loop, why does he use a sqrt(n + 1) instead of just sqrt(n)?
Why wouldn't you also use sqrt(n + 1) when iterating over the even numbers in the first while loop?
How does the algorithm manage to find only prime factors? In the algorithm I first wrote I had a separate test for checking whether a factor was prime, but he doesn't bother.
I suspect the +1 has to do with floating-point imprecision (I am not sure whether it's actually required, or simply a defensive move on the author's part).
The first while loop factors all twos out of n. I don't see how sqrt(n + 1) would fit in there.
If you work from small factors to large factors, you automatically eliminate all composite candidates. Think about it: once you've factored out all the 5s, you've automatically factored out 10, 15, 20, etc. No need to check whether a candidate divisor is prime: by that point n will no longer be divisible by it.
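As a quick illustration, here is roughly how factorize() walks through the 13195 example from the problem statement (values worked out by hand, so treat the trace as a sketch):

# factorize(13195):
#   13195 is odd, so the while-2 loop does nothing
#   i = 3:  13195 % 3 != 0                       -> i = 5
#   i = 5:  13195 % 5 == 0  -> res = [5],         n = 2639
#   i = 5:  2639 % 5 != 0                        -> i = 7
#   i = 7:  2639 % 7 == 0   -> res = [5, 7],      n = 377
#   i = 7, 9, 11: none divide 377                -> i = 13
#   i = 13: 377 % 13 == 0   -> res = [5, 7, 13],  n = 29
#   now i = 13 exceeds the updated limit, so the loop ends and the remaining n = 29 is appended
print(factorize(13195))   # [5, 7, 13, 29] -- no composite ever divides the shrinking n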
I suspect that checking for primality is what's killing your original algorithm's performance.
I've got what I think is a valid solution to Problem 2 of Project Euler (summing the even-valued Fibonacci numbers up to 4,000,000). It works for lower limits, but crashes when I run it with 4,000,000. I understand that this is computationally difficult, but shouldn't it just take a long time to compute rather than crash? Or is there an issue in my code?
import functools

def fib(limit):
    sequence = []
    for i in range(limit):
        if(i < 3):
            sequence.append(i)
        else:
            sequence.append(sequence[i-1] + sequence[i-2])
    return sequence

def add_even(x, y):
    if(y % 2 == 0):
        return x + y
    return x + 0

print(functools.reduce(add_even, fib(4000000)))
The problem is about the Fibonacci numbers that are smaller than 4,000,000, but your code tries to compute the first 4,000,000 Fibonacci values instead. Since Fibonacci numbers grow exponentially, the later terms have hundreds of thousands of digits, so building that list exhausts memory (and time) long before it finishes.
You need to change your function to stop when the last calculated value is more than 4000000.
Another possible improvement is to add the numbers as you are calculating them instead of storing them in a list, but this won't be necessary if you stop at the appropriate time.
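A minimal sketch combining both suggestions (stopping at the limit and summing on the fly); the function name and starting values are my own choices:

def even_fib_sum(limit=4000000):
    total = 0
    a, b = 1, 2
    while a <= limit:              # stop once the Fibonacci value exceeds the limit
        if a % 2 == 0:
            total += a             # accumulate even terms instead of storing the list
        a, b = b, a + b
    return total

print(even_fib_sum())              # 4613732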