Recently I found a puzzle that required me to list all cyclic primes below a number.
In this context, cyclic means that every rotation of its digits is also prime, e.g.:
1193 is prime
1931 is prime
9311 is prime
3119 is prime
This is the code I originally wrote:
import math

a = []
upto = 1000000
for x in range(upto):
    a.append([x, 0])
print('generated table')
a[1][1] = 1
a[0][1] = 1
for n in range(2, int(math.sqrt(upto))):
    for k in range(2, int(upto / n) + 2):
        try:
            a[n * k][1] = 1
        except IndexError:
            pass
print('sieve complete')
p = []
for e in a:
    if e[1] == 0:
        p.append(e[0])
print('primes generated')
s = []
for e in p:
    pr = True
    w = str(e)
    if all(c not in w for c in ['2', '4', '6', '8', '5', '0']):
        for x in (w[i:] + w[:i] for i in range(len(w))):
            if int(x) not in p:
                pr = False
        if pr == True:
            s.append(e)
            print('found', e)
print(s)
It was fairly slow! (about 12 s) I know the prime generation isn't perfect, but the final bit is the slowest. I knew that this process for upto = 10**6 can be done in under a second, so after some research I removed any string manipulation in favor of this function:
def rotate(n):
    prev = []
    for l in range(6, 0, -1):
        if n < 10**l:
            length = l
    while n not in prev:
        prev.append(n)
        n = (n // 10) + (n % 10) * 10**(length - 1)
        yield n
I also removed the 5, 0, 2, 4, 6, 8 testing, as I didn't know how to implement it without strings. The result? It runs even slower (over ten minutes), so I guess that digit testing was a good idea.
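For reference, here is one way the digit test could have been kept without strings (a sketch I'm adding, not from the original question): a multi-digit circular prime can only contain the digits 1, 3, 7 and 9, which can be checked with modulo arithmetic.

def has_only_1379(n):
    # Inspect each decimal digit of n via modulo, no string conversion.
    while n:
        if n % 10 not in (1, 3, 7, 9):
            return False
        n //= 10
    return True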
I tried using time.time() but I didn't find anything terribly inefficient (in the first code). How is it possible to improve this code? Are there any bad practices I'm currently using?
Here is some optimized code:
upto = 1000000
a = [True] * upto
p = []
for n in xrange(2, upto):
    if a[n]:
        p.append(n)
        for k in xrange(2, (upto + n - 1) // n):
            a[k * n] = False
print('primes generated')

s = []
p = set(p)
for e in p:
    pr = True
    w = str(e)
    if all(c not in w for c in ['2', '4', '6', '8', '5', '0']):
        for x in (w[i:] + w[:i] for i in range(len(w))):
            if int(x) not in p:
                pr = False
                break
        if pr:
            s.append(e)
print(s)
The most important optimizations:

simplified the sieve code
converted the list of primes into a set; this makes the test x in p take (on average) constant time instead of linear time (see the membership sketch after the cleaner code below)
added a break statement when a non-prime rotation is found

And here is cleaner (but equivalent) code:
upto = 1000000
sieve = [True] * upto
primes = set()
for n in xrange(2, upto):
    if sieve[n]:
        primes.add(n)
        for k in xrange(2, (upto + n - 1) // n):
            sieve[k * n] = False

def good(e):
    w = str(e)
    for c in w:
        if c not in '1379':
            return False
    for i in xrange(1, len(w)):
        x = int(w[i:] + w[:i])
        if x not in primes:
            return False
    return True

print filter(good, primes)
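As promised above, here is a micro-benchmark sketch (mine, not part of the answer) that shows why converting the list to a set matters; exact timings will vary by machine:

import timeit

p_list = list(range(1000000))
p_set = set(p_list)
# Worst case for the list: the sought element is at the end.
print(timeit.timeit('999999 in p_list', globals=globals(), number=100))
print(timeit.timeit('999999 in p_set', globals=globals(), number=100))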
You can cut down on the time required for the first test by doing a set comparison instead of doing the full iteration each time like so:
flags = set('246850')
if not set(str(e)).intersection(flags):
    # etc...
This avoids scanning the string once per flagged digit (set lookups take constant time on average), and in practice picks up another factor of two on this step. You can speed this up further and make it a little more elegant by transitioning it over to a generator that you can then use to do the final check, like so:
flags = set('246850')
primes = set(p)
easy_checks = (str(prime) for prime in primes if not set(str(prime)).intersection(flags))
Finally you can rewrite that final bit to get rid of all the appending and whatnot, which tends to be super slow like so:
test = lambda number: all(int(number[i:] + number[:i]) in primes for i in xrange(len(number)))
final = [number for number in easy_checks if test(number)]
The program shows "OverflowError: Python int too large to convert to C ssize_t" for large integer inputs (which are mandatory to test the efficiency of the program for all the boundary cases). How do I deal with this error?
import random
import sys

sys.setrecursionlimit(10**6)

t = int(input())
N = []
K = []
B = []
while 1 <= t <= 20:
    n, k, b = input().split()
    n, k, b = [int(n), int(k), int(b)]
    t = t - 1
    N.append(n)
    K.append(k)
    B.append(b)

if b >= 1 and b <= (10**5) and n >= 1 and k <= (10**18) and b <= k:
    i1 = 0
    for val in K:
        n = N[i1]
        k = K[i1]
        b = B[i1]
        i1 = i1 + 1
        print('i entered for loop')
        if sum(list(range(1, k+1))) >= n:
            print(' i entered if loop')

            def possibilities():
                p = random.sample(range(1, k+1), b)
                if sum(p) == n:
                    for i in range(0, b):
                        print(p[i], end=" ")
                    print("\r")
                else:
                    possibilities()

            possibilities()
        else:
            print(-1)
As per the docs, random.randrange works

over an arbitrarily large range

so instead of doing:

p = random.sample(range(1, k+1), b)

you could do the following:

p = [random.randrange(1, k+1) for _ in range(b)]

and have it work for arbitrarily large values of k. Note that when k is smaller than 2**63-1 (assuming you're using a 64-bit machine), using sample is likely going to be faster.
As an example of where your code was apparently failing, random.sample(range(2**63), 1) gives

OverflowError: Python int too large to convert to C ssize_t

while [random.randrange(2**63) for _ in range(10)] gives me a list of large numbers without complaint.
I also note that:

the distributions involved in your code would seem to allow you to give a nice analytical answer in O(t) time
you might also want to improve your checks to stop it running forever, e.g. n, k, b = [2, 2, 2] would seem to pass all your checks but will spin forever
using a for loop would be much more efficient than recursion, given that Python isn't "tail recursive" (see the sketch below)
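For illustration, here is the question's possibilities() rewritten as a loop (a sketch of mine combining the randrange fix with the recursion-to-loop advice, not code from the original answer):

import random

def possibilities(n, k, b):
    # Rejection sampling in a loop: keep drawing b numbers from [1, k]
    # until they sum to n. No recursion depth limit to worry about.
    while True:
        p = [random.randrange(1, k + 1) for _ in range(b)]
        if sum(p) == n:
            return p

print(*possibilities(10, 5, 3))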
I'm trying to complete the following challenge: https://app.codesignal.com/challenge/ZGBMLJXrFfomwYiPs.
I have written code that appears to work; however, it is so inefficient that it fails the test (it takes too long to execute and uses too much memory). Are there any ways I can make this more efficient? I'm quite new to writing efficient scripts. Someone mentioned map() can be used in lieu of for i in range(1, n). Thank you Xero Smith and others for the suggestions optimising it this far:
from functools import reduce
from operator import mul
from itertools import combinations

# Starting from the maximum, we can divide our bag combinations to see the total number of integer factors
def prime_factors(n):
    p = 2
    dct = {}
    while n != 1:
        if n % p:
            p += 1
        else:
            dct[p] = dct.get(p, 0) + 1
            n = n // p
    return dct

def number_of_factors(n):
    return reduce(mul, (i + 1 for i in prime_factors(n).values()), 1)

def kinderLevon(bags):
    candies = list()
    for x in (combinations(bags, i) for i in range(1, len(bags) + 1)):
        for j in x:
            candies.append(sum(j))
    satisfied_kids = [number_of_factors(i) for i in candies]
    return candies[satisfied_kids.index(max(satisfied_kids))]
Any help would be greatly appreciated.
Thanks,
Aaron
Following my comment, I can already identify a memory and complexity improvement. In your factors function, since you only need the number of factors, you could simply count them instead of storing them:
def factors(n):
    k = 2  # 1 and n always divide n
    for i in range(2, n // 2 + 1):
        if n % i == 0:
            k += 1
    return k
EDIT: as suggested in the comments, stop the counter earlier. This actually reduces the time complexity for huge numbers, but not really for smaller ones. It is a much better improvement than the one using list comprehensions (which still allocates memory).
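Going one step further in the same direction (my sketch, not part of the answer): divisors come in pairs (i, n // i), so the counting loop only needs to run up to the square root of n.

def factors(n):
    # Count divisors in pairs (i, n // i), stopping at sqrt(n).
    k = 0
    i = 1
    while i * i < n:
        if n % i == 0:
            k += 2
        i += 1
    if i * i == n:  # perfect square: count the root once
        k += 1
    return k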
Moreover, it is pointless to allocate your combinations list twice. You're doing:

x = list(combinations(bags, i));
for j in list(x):
    ...

The first line converts the iterator returned by combinations into a list, duplicating the data. The second line, list(x), re-allocates a copy of that list, taking even more memory! There you should really just write:

for j in combinations(bags, i):
    ...
As a matter of syntax, please don't use semicolons (;) in Python!
First things first: combinations are iterable. This means you do not have to convert them into lists before you iterate over them; in fact, it is terribly inefficient to do so.
The next thing that can be improved significantly is your factors procedure. Currently it is linear; we can do better. We can get the number of factors of an integer N via the following algorithm:

get the prime factorisation of N, such that N = p1^n1 * p2^n2 * ...
the number of factors of N is then (1+n1) * (1+n2) * ...

See https://www.wikihow.com/Find-How-Many-Factors-Are-in-a-Number for details.
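As a quick sanity check of the formula (my example, not from the answer): 360 = 2^3 * 3^2 * 5^1, so it should have (3+1) * (2+1) * (1+1) = 24 factors.

from functools import reduce
from operator import mul

exponents = [3, 2, 1]  # exponents in 360 = 2^3 * 3^2 * 5^1
print(reduce(mul, (e + 1 for e in exponents), 1))  # 24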
Something else, your current solution has a lot of variables and computations that are not used. Get rid of them.
With these, we get the following which should work:
from functools import reduce
from operator import mul
from itertools import combinations

# Starting from the maximum, we can divide our bag combinations to see the total number of integer factors
def prime_factors(n):
    p = 2
    dct = {}
    while n != 1:
        if n % p:
            p += 1
        else:
            dct[p] = dct.get(p, 0) + 1
            n = n // p
    return dct

def number_of_factors(n):
    return reduce(mul, (i + 1 for i in prime_factors(n).values()), 1)

def kinderLevon(bags):
    candies = list()
    for x in (combinations(bags, i) for i in range(1, len(bags) + 1)):
        for j in x:
            candies.append(sum(j))
    satisfied_kids = [number_of_factors(i) for i in candies]
    return candies[satisfied_kids.index(max(satisfied_kids))]
Use list comprehensions. The factors function can be transformed like this:

def factors(n):
    return len([i for i in range(1, n + 1) if n % i == 0])
I am practicing list comprehensions and nested list comprehensions. As part of my practice I am writing out equivalent for loops. This for loop I cannot get right, and I believe it's because I'm trying to assign a value rather than a variable in a function call. The error I receive is:
File "<stdin>", line 4
SyntaxError: can't assign to function call
The code I have written for this loop is:
import math

def squared_primes():
    list = []
    for x in range(1, 1000000):
        for q in range(2, math.sqrt(x) + 1):
            if all(x % q != 0):
                list.append(x**2)
    print(list)
This function is trying to create a list of perfect squares whose roots are prime numbers in the range 1 to 1000000.
Can someone help me understand where exactly the syntax of my loop breaks down? Also, can I possibly do this as a nested list comprehension? Clearly my list comprehension is breaking down because I can't get my for loop syntax right...
SOLUTION: Thanks to user #Evan, I was able to fix the variable and syntax problems, and took some cues about how to fix the all() statement from this thread.
This code will properly return a list of the squared primes from 1 to 1000:

def squared_primes():
    list1 = []
    for x in range(1, 1000):
        if all(x % q != 0 for q in range(2, int(math.sqrt(x) + 1))):
            list1.append(x**2)
    print(list1)
This code will properly return a list of the squared primes from 1 to 1000:

Except that it returns 1 as the first element of the list, and 1's square root isn't a prime. Let's fix that glitch and rewrite the code as a proper function:
from math import sqrt

def squared_primes(maximum):
    primes = []
    for number in range(2, maximum):
        if all(number % divisor != 0 for divisor in range(2, int(sqrt(number)) + 1)):
            primes.append(number ** 2)
    return primes

print(squared_primes(1000))
BTW, this is not a list comprehension:

all(x % q != 0 for q in range(2, int(math.sqrt(x) + 1)))

it's a generator expression! If you wanted a list comprehension you would have written:

all([x % q != 0 for q in range(2, int(math.sqrt(x) + 1))])

but stick with the generator, as it fails composites with less effort.
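A small sketch (mine) of what "less effort" means here: all() stops at the first False when fed a generator, while a list comprehension evaluates every divisor test before all() even runs.

def check(x, q):
    print('trying divisor', q)
    return x % q != 0

all(check(9, q) for q in range(2, 9))    # stops as soon as q == 3 fails
all([check(9, q) for q in range(2, 9)])  # runs all seven tests first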
Your code will start to bog down when we ask for a list of the squares up to 1000000 (a million) or more. That's when we'll want a more efficient sieve-based algorithm like:
def squared_primes(maximum):
    sieve = [True] * maximum
    if maximum > 0:
        sieve[0] = False  # zero is not a prime
    if maximum > 1:
        sieve[1] = False  # one is not a prime
    for index in range(2, int(maximum ** 0.5) + 1):
        if sieve[index]:
            prime = index
            for multiple in range(prime + prime, maximum, prime):
                sieve[multiple] = False
    return [index * index for index in range(maximum) if sieve[index]]
At around a million, this code will return results about 20x faster than your division-based solution.
And #Evan's glorious comprehension, since it lacks your math.sqrt() optimization, will be orders of magnitude slower than either (I'm still waiting for it to finish for one million) and starts the list with two incorrect results. We can put it on a par time-wise with your revised code by doing:
from math import sqrt

def squared_primes(maximum):
    return [number ** 2 for number in range(2, maximum) if all(number % divisor for divisor in range(2, int(sqrt(number)) + 1))]

print(squared_primes(1000))
And this is a list comprehension. But again, the wrong approach so go back and look at the sieve-based implementation.
This is pretty concise. List comprehensions are glorious.
def squared_primes(maximum):
    return [x**2 for x in range(0, maximum) if all(x % i for i in range(2, x))]

print(squared_primes(1000000))
I currently have ↓ set as my randprime(p,q) function. Is there any way to condense this, via something like a genexp or listcomp? Here's my function:
n = randint(p, q)
while not isPrime(n):
    n = randint(p, q)
It's better to just generate the list of primes and then choose from that list. As is, with your code there is a slim chance it will hit an infinite loop: if there are no primes in the interval, or if randint keeps picking non-primes, the while loop will never end.
So this is probably shorter and less troublesome:
import random
primes = [i for i in range(p,q) if isPrime(i)]
n = random.choice(primes)
The other advantage of this is there is no chance of deadlock if there are no primes in the interval. As stated this can be slow depending on the range, so it would be quicker if you cached the primes ahead of time:
# initialising primes
minPrime = 0
maxPrime = 1000
cached_primes = [i for i in range(minPrime,maxPrime) if isPrime(i)]
#elsewhere in the code
import random
n = random.choice([i for i in cached_primes if p<i<q])
Again, further optimisations are possible, but are very much dependent on your actual code... and you know what they say about premature optimisation.
Here is a script written in Python to generate n random prime integers between two given integers:
import numpy as np

def getRandomPrimeInteger(bounds):
    for i in range(bounds.__len__() - 1):
        if bounds[i + 1] > bounds[i]:
            x = bounds[i] + np.random.randint(bounds[i+1] - bounds[i])
            if isPrime(x):
                return x
        else:
            if isPrime(bounds[i]):
                return bounds[i]
            if isPrime(bounds[i + 1]):
                return bounds[i + 1]

    newBounds = [0 for i in range(2 * bounds.__len__() - 1)]
    newBounds[0] = bounds[0]
    for i in range(1, bounds.__len__()):
        newBounds[2*i - 1] = int((bounds[i-1] + bounds[i]) / 2)
        newBounds[2*i] = bounds[i]

    return getRandomPrimeInteger(newBounds)

def isPrime(x):
    count = 0
    for i in range(int(x/2)):
        if x % (i+1) == 0:
            count = count + 1
    return count == 1

# ex: get 50 random prime integers between 100 and 10000:
bounds = [100, 10000]
for i in range(50):
    x = getRandomPrimeInteger(bounds)
    print(x)
So it would be great if you could use an iterator to give the integers from p to q in random order (without replacement). I haven't been able to find a way to do that. The following will give random integers in that range and will skip anything it has tested already.

import random

fail = False
tested = set([])
n = random.randint(p, q)
while not isPrime(n):
    tested.add(n)
    if len(tested) == q - p + 1:
        fail = True
        break
    while n in tested:
        n = random.randint(p, q)
if fail:
    print('I failed')
else:
    print(n, 'is prime')
The big advantage of this is that if say the range you're testing is just (14,15), your code would run forever. This code is guaranteed to produce an answer if such a prime exists, and tell you there isn't one if such a prime does not exist. You can obviously make this more compact, but I'm trying to show the logic.
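Incidentally, the "random order without replacement" iteration the answer wishes for can be had by shuffling the candidate list up front (a sketch of mine, assuming the question's isPrime(); it materialises the whole range, so it suits modest intervals):

import random

def randprime(p, q):
    # Visit every candidate in [p, q] exactly once, in random order.
    candidates = list(range(p, q + 1))
    random.shuffle(candidates)
    for n in candidates:
        if isPrime(n):
            return n
    return None  # no prime exists in the interval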
next(i for i in itertools.imap(lambda x: random.randint(p,q)|1,itertools.count()) if isPrime(i))
This starts with itertools.count() - this gives an infinite sequence of integers.
Each number is mapped to a new random number in the range by itertools.imap(). imap is like map, but returns an iterator rather than a list - we don't want to generate a list of infinitely many random numbers!
Then, the first matching number is found, and returned.
Works efficiently, even if p and q are very far apart - e.g. 1 and 10**30, which generating a full list won't do!
By the way, this is not more efficient than your code above, and is a lot more difficult to understand at a glance - please have some consideration for the next programmer to have to read your code, and just do it as you did above. That programmer might be you in six months, when you've forgotten what this code was supposed to do!
P.S - in practice, you might want to replace count() with xrange (NOT range!), e.g. xrange(int((q-p)**1.5) + 20), to do no more than that number of attempts (balanced between limited tests for small ranges and large ranges, with no more than a 1/2% chance of failing if it could succeed); otherwise, as was suggested in another post, you might loop forever.
PPS - improvement: replaced random.randint(p,q) with random.randint(p,q)|1 - this makes the code twice as efficient, but eliminates the possibility that the result will be 2.
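Putting the P.S. and P.P.S. together (my reading of them, written with range for Python 3; p, q and isPrime() are assumed from the question):

import random

# Cap the number of attempts instead of looping forever; force odd candidates.
attempts = int((q - p) ** 1.5) + 20
n = next((i for i in (random.randint(p, q) | 1 for _ in range(attempts))
          if isPrime(i)),
         None)  # None if the attempt budget runs out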
I know there's already a question similar to this, but I want to speed it up using GMPY2 (or something similar with GMP).
Here is my current code, it's decent but can it be better?
Edit: new code, checks divisors 2 and 3
import gmpy2
from gmpy2 import mpz

def factors(n):
    result = set()
    result |= {mpz(1), mpz(n)}

    def all_multiples(result, n, factor):
        z = mpz(n)
        while gmpy2.f_mod(mpz(z), factor) == 0:
            z = gmpy2.divexact(z, factor)
            result |= {mpz(factor), z}
        return result

    result = all_multiples(result, n, 2)
    result = all_multiples(result, n, 3)
    for i in range(1, gmpy2.isqrt(n) + 1, 6):
        i1 = mpz(i) + 1
        i2 = mpz(i) + 5
        div1, mod1 = gmpy2.f_divmod(n, i1)
        div2, mod2 = gmpy2.f_divmod(n, i2)
        if mod1 == 0:
            result |= {i1, div1}
        if mod2 == 0:
            result |= {i2, div2}
    return result
If it's possible, I'm also interested in an implementation with divisors only within n^(1/3) and 2^(2/3)*n^(1/3).
As an example, Mathematica's factoring is much faster than the Python code. I want to factor numbers between 20 and 50 decimal digits; I know ggnfs can factor these in less than 5 seconds. I am also interested in whether any Python module implementing fast factorization exists.
I just made some quick changes to your code to eliminate redundant name lookups. The algorithm is still the same but it is about twice as fast on my computer.
import gmpy2
from gmpy2 import mpz

def factors(n):
    result = set()
    n = mpz(n)
    for i in range(1, gmpy2.isqrt(n) + 1):
        div, mod = divmod(n, i)
        if not mod:
            result |= {mpz(i), div}
    return result

print(factors(12345678901234567))
Other suggestions will need more information about the size of the numbers, etc. For example, if you need all the possible factors, it may be faster to construct those from all the prime factors. That approach will let you decrease the limit of the range statement as you proceed and also will let you increment by 2 (after removing all the factors of 2).
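A sketch of that idea (mine, not the answer's code): once the prime factors are known, every divisor can be built from them.

def all_divisors(prime_factorisation):
    # prime_factorisation is a {prime: exponent} mapping, e.g. {2: 2, 3: 1} for 12.
    divisors = [1]
    for prime, exponent in prime_factorisation.items():
        divisors = [d * prime**e for d in divisors for e in range(exponent + 1)]
    return sorted(divisors)

print(all_divisors({2: 2, 3: 1}))  # [1, 2, 3, 4, 6, 12]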
Update 1
I've made some additional changes to your code. I don't think your all_multiples() function is correct. Your range() statement isn't optimal, since 2 is checked again, but my first fix made it worse.
The new code delays computing the co-factor until it knows the remainder is 0. I also tried to use the built-in functions as much as possible. For example, mpz % integer is faster than gmpy2.f_mod(mpz, integer) or gmpy2.f_mod(integer, mpz) where integer is a normal Python integer.
import gmpy2
from gmpy2 import mpz, isqrt

def factors(n):
    n = mpz(n)
    result = set()
    result |= {mpz(1), n}

    def all_multiples(result, n, factor):
        z = n
        f = mpz(factor)
        while z % f == 0:
            result |= {f, z // f}
            f += factor
        return result

    result = all_multiples(result, n, 2)
    result = all_multiples(result, n, 3)
    for i in range(1, isqrt(n) + 1, 6):
        i1 = i + 1
        i2 = i + 5
        if not n % i1:
            result |= {mpz(i1), n // i1}
        if not n % i2:
            result |= {mpz(i2), n // i2}
    return result

print(factors(12345678901234567))
I would change your program to just find all the prime factors less than the square root of n and then construct all the co-factors later. Then you decrease n each time you find a factor, check if n is prime, and only look for more factors if n isn't prime.
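A minimal sketch of that strategy (my interpretation, without the primality shortcut): divide n down by each factor found, so the search bound keeps shrinking.

def prime_factors(n):
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d  # shrink n, which also shrinks the d*d <= n bound
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]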
Update 2
The pyecm module should be able to factor numbers of the size you are trying to factor. The following example completes in about a second.
>>> import pyecm
>>> list(pyecm.factors(12345678901234567890123456789012345678901, False, True, 10, 1))
[mpz(29), mpz(43), mpz(43), mpz(55202177), mpz(2928109491677), mpz(1424415039563189)]
There exist various Python factoring modules on the Internet. But if you want to implement factoring yourself (without using external libraries) then I can suggest the quite fast and very easy to implement Pollard-Rho algorithm. I implemented it fully in my code below; just scroll down to the code (at the bottom of the answer) if you don't want to read the explanation.

With great probability the Pollard-Rho algorithm finds the smallest non-trivial factor P (not equal to 1 or N) within time O(Sqrt(P)). By comparison, the Trial Division algorithm that you implemented in your question takes O(P) time to find factor P. This means, for example, that if a prime factor is P = 1 000 003 then trial division will find it after 1 000 003 division operations, while Pollard-Rho will on average find it after just 1 000 operations (Sqrt(1 000 003) ≈ 1 000), which is much, much faster.

To make the Pollard-Rho algorithm much faster we should be able to detect prime numbers, to exclude them from factoring rather than waste time on them. For that, my code uses the Fermat Primality Test, which is very fast and easy to implement within just 7-9 lines of code.

The Pollard-Rho algorithm itself is very short, 13-15 lines of code; you can see it at the very bottom of my pollard_rho_factor() function. The remaining lines of code are supplementary helper functions.

I implemented all algorithms from scratch without using extra libraries (except the random module). That's why you can see my gcd() function there, although you can use Python's built-in math.gcd() instead (which finds the Greatest Common Divisor).

You can see the function Int() in my code; it is used just to convert Python's integers to GMPY2 integers. GMPY2 ints make the algorithm faster; you can just use Python's int(x) instead. I didn't use any specific GMPY2 functions, just converted all ints to GMPY2 ints, for around a 50% speedup.

As an example, I factor the first 190 digits of Pi! It takes 3-15 seconds to factor them. The Pollard-Rho algorithm is randomized, hence it takes a different amount of time to factor the same number on each run. You can restart the program and see that it prints a different running time.

Of course the factoring time depends greatly on the size of the prime divisors. Some 50-200 digit numbers can be factored within a fraction of a second, while others will take months. My example with 190 digits of Pi has quite small prime factors, apart from the largest one; that's why it is fast. Other digits of Pi may not factor as quickly. So the digit-size of a number doesn't matter very much; only the size of its prime factors matters.

I intentionally implemented pollard_rho_factor() as one standalone function, without breaking it into smaller separate functions, although this goes against Python's style guide, which (as I remember) suggests not having nested functions and placing all possible functions at global scope. The style guide also suggests doing all imports at global scope, in the first lines of the script. I used a single function intentionally so that it is easily copy-pastable and fully ready to use in your code. The Fermat primality test sub-function is_fermat_probable_prime() is also copy-pastable and works without extra dependencies.

In very rare cases the Pollard-Rho algorithm may fail to find a non-trivial prime factor, especially for very small factors; for example, you can replace n inside test() with the small number 4 and see that Pollard-Rho fails. For such small failed factors you can easily fall back to the Trial Division algorithm that you implemented in your question.
Try it online!
def pollard_rho_factor(N, *, trials = 16):
    # https://en.wikipedia.org/wiki/Pollard%27s_rho_algorithm
    import math, random

    def Int(x):
        import gmpy2
        return gmpy2.mpz(x) # int(x)

    def is_fermat_probable_prime(n, *, trials = 32):
        # https://en.wikipedia.org/wiki/Fermat_primality_test
        import random
        if n <= 16:
            return n in (2, 3, 5, 7, 11, 13)
        for i in range(trials):
            if pow(random.randint(2, n - 2), n - 1, n) != 1:
                return False
        return True

    def gcd(a, b):
        # https://en.wikipedia.org/wiki/Greatest_common_divisor
        # https://en.wikipedia.org/wiki/Euclidean_algorithm
        while b != 0:
            a, b = b, a % b
        return a

    def found(f, prime):
        print(f'Found {("composite", "prime")[prime]} factor, {math.log2(f):>7.03f} bits... {("Pollard-Rho failed to fully factor it!", "")[prime]}')
        return f

    N = Int(N)
    if N <= 1:
        return []
    if is_fermat_probable_prime(N):
        return [found(N, True)]
    for j in range(trials):
        i, stage, y, x = 0, 2, Int(1), Int(random.randint(1, N - 2))
        while True:
            r = gcd(N, abs(x - y))
            if r != 1:
                break
            if i == stage:
                y = x
                stage <<= 1
            x = (x * x + 1) % N
            i += 1
        if r != N:
            return sorted(pollard_rho_factor(r) + pollard_rho_factor(N // r))
    return [found(N, False)] # Pollard-Rho failed

def test():
    import time
    # http://www.math.com/tables/constants/pi.htm
    # pi = 3.
    # 1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679
    # 8214808651 3282306647 0938446095 5058223172 5359408128 4811174502 8410270193 8521105559 6446229489 5493038196
    # n = first 190 fractional digits of Pi
    n = 1415926535_8979323846_2643383279_5028841971_6939937510_5820974944_5923078164_0628620899_8628034825_3421170679_8214808651_3282306647_0938446095_5058223172_5359408128_4811174502_8410270193_8521105559_6446229489
    tb = time.time()
    print('N:', n)
    print('Factors:', pollard_rho_factor(n))
    print(f'Time: {time.time() - tb:.03f} sec')

test()
Output:
N: 1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489
Found prime factor, 1.585 bits...
Found prime factor, 6.150 bits...
Found prime factor, 20.020 bits...
Found prime factor, 27.193 bits...
Found prime factor, 28.311 bits...
Found prime factor, 545.087 bits...
Factors: [mpz(3), mpz(71), mpz(1063541), mpz(153422959), mpz(332958319), mpz(122356390229851897378935483485536580757336676443481705501726535578690975860555141829117483263572548187951860901335596150415443615382488933330968669408906073630300473)]
Time: 2.963 sec