I'm trying to maximize Euler's totient function in Python, taking advantage of its arbitrary-precision integers. The problem is that the program gets killed after some time, so it never reaches the desired ratio. I have thought of starting from a larger number, but I don't think that's prudent. I'm trying to find a number that, when divided by its totient, gives a ratio higher than 10; essentially, I'm trying to find a sparsely totient number that fits this criterion.
Here's my phi function:
import fractions  # note: fractions.gcd was removed in Python 3.9; use math.gcd on modern Python

def phi(n):
    amount = 0
    for k in range(1, n + 1):
        if fractions.gcd(n, k) == 1:
            amount += 1
    return amount
The most likely candidates for high ratios of N/phi(N) are products of prime numbers. If you're just looking for one number with a ratio > 10, you can generate primes and only check the product of primes up to the point where you get the desired ratio:
def totientRatio(maxN, ratio=10):
    primes = []
    primeProd = 1
    isPrime = [1] * (maxN + 1)
    p = 2
    while p * p <= maxN:
        if isPrime[p]:
            isPrime[p*p::p] = [0] * len(range(p*p, maxN + 1, p))
            primes.append(p)
            primeProd *= p
            tot = primeProd
            for f in primes:
                tot -= tot // f
            if primeProd / tot >= ratio:
                return primeProd, primeProd / tot, len(primes)
        p += 1 + (p & 1)
output:

totientRatio(10**6)
(16516447045902521732188973253623425320896207954043566485360902980990824644545340710198976591011245999110,
 10.00371973209101,
 55)
This gives you the smallest number with that ratio. Multiples of that number will have the same ratio.
n = 16516447045902521732188973253623425320896207954043566485360902980990824644545340710198976591011245999110
n*2/totient(n*2) = 10.00371973209101
n*11*13/totient(n*11*13) = 10.00371973209101
No number will have a higher ratio until you reach the next product of primes (i.e. that number multiplied by the next prime).
n*263/totient(n*263) = 10.041901868473037
Removing a prime from the product affects the ratio by a proportion of (1-1/P).
For example if m = n/109, then m/phi(m) = n/phi(n) * (1-1/109)
(n//109) / totient(n//109) = 9.91194248684247
10.00371973209101 * (1-1/109) = 9.91194248684247
This should allow you to navigate the ratios efficiently and find the numbers that meet your needs.
For example, to get a number with a ratio that is >= 10 but closer to 10, you can go to the next prime product(s) and remove one or more of the smaller primes to reduce the ratio. This can be done using combinations (from itertools) and will allow you to find very specific ratios:
m = n*263/241
m/totient(m) = 10.000234225865265
m = n*(263...839) / (7 * 61 * 109 * 137) # 839 is 146th prime
m/totient(m) = 10.000000079805726
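If you have sympy available, you can sanity-check these ratios directly (sympy.primorial and sympy.totient are assumed here; any exact totient implementation will do):

from sympy import primorial, totient

n = primorial(55)                      # product of the first 55 primes
print(float(n / totient(n)))           # 10.00371973209101
print(float(2*n / totient(2*n)))       # multiples keep the same ratio
print(float(263*n / totient(263*n)))   # next prime raises it: 10.041901868473037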
I have a partial solution for you, but the results don't look good. (This solution may not give you an answer with modern computer hardware; the amount of RAM is the limiting factor.) I took an answer from this Code Golf challenge and modified it to output ratios of n/phi(n) up to a particular n:
import numba as nb
import numpy as np
import time

n = int(2**31)

@nb.njit("i4[:](i4[:])", locals=dict(
    n=nb.int32, i=nb.int32, j=nb.int32, q=nb.int32, f=nb.int32))
def summarum(phi):
    # calculate phi(i) for i in 1..n-1
    # taken from https://codegolf.stackexchange.com/a/26753/42652
    phi[1] = 1
    i = 2
    while i < n:
        if phi[i] == 0:
            phi[i] = i - 1
            j = 2
            while j * i < n:
                if phi[j] != 0:
                    q = j
                    f = i - 1
                    while q % i == 0:
                        f *= i
                        q //= i
                    phi[i * j] = f * phi[q]
                j += 1
        i += 1
    # replace phi(i) with the floored ratio i // phi(i)
    i = 1
    while i < n:  # jit compiled while loop is faster than: for i in range(): blah blah blah
        phi[i] = i // phi[i]
        i += 1
    return phi

if __name__ == "__main__":
    s1 = time.time()
    a = summarum(np.zeros(n, np.int32))
    locations = np.where(a >= 10)  # np.where returns a tuple of arrays
    print(len(locations[0]))
I only have enough RAM on my work computer to test up to about 0 < n < 10^8, and the largest ratio was about 6. You may or may not have luck going up to larger n, although 10^8 already took several seconds (not sure what the overhead was... Spyder's been acting strange lately).
p_55# (the 55th primorial) is a sparsely totient number satisfying the desired condition.
Furthermore, all subsequent primorial numbers are as well, because p_n# / phi(p_n#) is a strictly increasing sequence:
p_1# / phi(p_1#) is 2, which is positive. For n > 1, p_n# / phi(p_n#) is equal to p_{n-1}# * p_n / phi(p_{n-1}# * p_n), which, since p_n and p_{n-1}# are coprime, is equal to (p_{n-1}# / phi(p_{n-1}#)) * (p_n / phi(p_n)). We know p_n > phi(p_n) > 0 for all n, so p_n / phi(p_n) > 1. So the sequence p_n# / phi(p_n#) is strictly increasing.
I do not believe these to be the only sparsely totient numbers satisfying your request, but I don't have an efficient way of generating the others coming to mind. Generating primorials, by comparison, amounts to generating the first n primes and multiplying the list together (whether by using functools.reduce(), math.prod() in 3.8+, or ye old for loop).
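For instance, a minimal sketch of primorial generation (sympy.prime for the nth prime is an assumption here; any prime generator works):

from math import prod    # Python 3.8+
from sympy import prime  # assumed available; prime(i) is the ith prime

def primorial(k):
    # p_k#: the product of the first k primes
    return prod(prime(i) for i in range(1, k + 1))

print(primorial(4))  # 2 * 3 * 5 * 7 = 210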
As for the general question of writing a phi(n) function, I would probably first find the prime factors of n, then use Euler's product formula for phi(n). As an aside, make sure to NOT use floating-point division. Even finding the prime factors of n by trial division should outperform computing gcd n times, but when working with large n, replacing this with an efficient prime factorization algorithm will pay dividends. Unless you want a good cross to die on, don't write your own. There's one in sympy that I'm aware of, and given the ubiquity of the problem, probably plenty of others around. Time as needed.
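A sketch of that approach, with sympy's factorint standing in for the factoring step (an assumption; substitute any prime factorization routine):

from sympy import factorint  # assumed available

def phi(n):
    # Euler's product formula, phi(n) = n * prod(1 - 1/p) over primes p | n,
    # carried out in exact integer arithmetic
    result = n
    for p in factorint(n):    # keys are the distinct prime factors
        result -= result // p
    return result

print(phi(30))  # 8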
Speaking of timing, if this is still relevant enough to you (or a future reader) to want to time... definitely throw the previous answer in the mix as well.
Examples:

1. Input = 4
   Output = 111
   Explanation:
   1 = 1³ (divisors of 1)
   2 = 1³ + 2³ (divisors of 2)
   3 = 1³ + 3³ (divisors of 3)
   4 = 1³ + 2³ + 4³ (divisors of 4)
   ------------------------
   sum = 111 (output)

2. Input = 5
   Output = 237
   Explanation:
   1 = 1³ (divisors of 1)
   2 = 1³ + 2³ (divisors of 2)
   3 = 1³ + 3³ (divisors of 3)
   4 = 1³ + 2³ + 4³ (divisors of 4)
   5 = 1³ + 5³ (divisors of 5)
   -----------------------------
   sum = 237 (output)
x = int(raw_input().strip())
tot = 0
for i in range(1, x+1):
    for j in range(1, i+1):
        if i % j == 0:
            tot += j**3
print tot
Using this code I can find the answer for small numbers, less than one million. But I want to find the answer for very large numbers. Is there an algorithm to solve this efficiently for large numbers?
Offhand I don't see a slick way to make this truly efficient, but it's easy to make it a whole lot faster. If you view your examples as matrices, you're summing them a row at a time. This requires, for each i, finding all the divisors of i and summing their cubes. In all, this requires a number of operations proportional to x**2.
You can easily cut that to a number of operations proportional to x, by summing the matrix by columns instead. Given an integer j, how many integers in 1..x are divisible by j? That's easy: there are x//j multiples of j in the range, so divisor j contributes j**3 * (x // j) to the grand total.
def better(x):
    return sum(j**3 * (x // j) for j in range(1, x+1))
That runs much faster, but still takes time proportional to x.
There are lower-level tricks you can play to speed that in turn by constant factors, but they still take O(x) time overall. For example, note that x // j == 1 for all j such that x // 2 < j <= x. So about half the terms in the sum can be skipped, replaced by closed-form expressions for a sum of consecutive cubes:
def sum3(x):
    """Return sum(i**3 for i in range(1, x+1))."""
    return (x * (x+1) // 2)**2

def better2(x):
    result = sum(j**3 * (x // j) for j in range(1, x//2 + 1))
    result += sum3(x) - sum3(x//2)
    return result
better2() is about twice as fast as better(), but to get faster than O(x) would require deeper insight.
Quicker
Thinking about this in spare moments, I still don't have a truly clever idea. But the last idea I gave can be carried to a logical conclusion: don't just group together divisors with only one multiple in range, but also those with two multiples in range, and three, and four, and ... That leads to better3() below, which does a number of operations roughly proportional to the square root of x:
def better3(x):
    result = 0
    for i in range(1, x+1):
        q1 = x // i
        # value i has q1 multiples in range
        result += i**3 * q1
        # which values have i multiples?
        q2 = x // (i+1) + 1
        assert x // q1 == i == x // q2
        if i < q2:
            result += i * (sum3(q1) - sum3(q2 - 1))
        if i+1 >= q2:  # this becomes true when i reaches roughly sqrt(x)
            break
    return result
Of course O(sqrt(x)) is an enormous improvement over the original O(x**2), but for very large arguments it's still impractical. For example better3(10**6) appears to complete instantly, but better3(10**12) takes a few seconds, and better3(10**16) is time for a coffee break ;-)
Note: I'm using Python 3. If you're using Python 2, use xrange() instead of range().
One more
better4() has the same O(sqrt(x)) time behavior as better3(), but does the summations in a different order that allows for simpler code and fewer calls to sum3(). For "large" arguments, it's about 50% faster than better3() on my box.
def better4(x):
    result = 0
    for i in range(1, x+1):
        d = x // i
        if d >= i:
            # d is the largest divisor that appears `i` times, and
            # all divisors less than `d` also appear at least that
            # often. Account for one occurrence of each.
            result += sum3(d)
        else:
            i -= 1
            lastd = x // i
            # We already accounted for i occurrences of all divisors
            # < lastd, and all occurrences of divisors >= lastd.
            # Account for the rest.
            result += sum(j**3 * (x // j - i)
                          for j in range(1, lastd))
            break
    return result
It may be possible to do better by extending the algorithm in "A Successive Approximation Algorithm for Computing the Divisor Summatory Function". That takes O(cube_root(x)) time for the possibly simpler problem of summing the number of divisors. But it's much more involved, and I don't care enough about this problem to pursue it myself ;-)
Subtlety
There's a subtlety in the math that's easy to miss, so I'll spell it out, but only as it pertains to better4().
After d = x // i, the comment claims that d is the largest divisor that appears i times. But is that true? The actual number of times d appears is x // d, which we did not compute. How do we know that x // d in fact equals i?
That's the purpose of the if d >= i: guarding that comment. After d = x // i we know that
x == d*i + r
for some integer r satisfying 0 <= r < i. That's essentially what floor division means. But since d >= i is also known (that's what the if test ensures), it must also be the case that 0 <= r < d. And that's how we know x // d is i.
This can break down when d >= i is not true, which is why a different method needs to be used then. For example, if x == 500 and i == 51, d (x // i) is 9, but it's certainly not the case that 9 is the largest divisor that appears 51 times. In fact, 9 appears 500 // 9 == 55 times. While for positive real numbers
d == x/i
if and only if
i == x/d
that's not always so for floor division. But, as above, the first does imply the second if we also know that d >= i.
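A quick numeric demonstration of that asymmetry:

x, i = 500, 51
d = x // i     # 9, but d >= i fails...
print(x // d)  # 55, not 51: i == x // d does not hold

x, i = 500, 9
d = x // i     # 55, and d >= i holds...
print(x // d)  # 9: x // d recovers i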
Just for Fun
better5() rewrites better4() for about another 10% speed gain. The real pedagogical point is to show that it's easy to compute all the loop limits in advance. Part of the point of the odd code structure above is that it magically returns 0 for a 0 input without needing to test for that. better5() gives up on that:
def isqrt(n):
    "Return floor(sqrt(n)) for int n > 0."
    g = 1 << ((n.bit_length() + 1) >> 1)
    d = n // g
    while d < g:
        g = (d + g) >> 1
        d = n // g
    return g

def better5(x):
    assert x > 0
    u = isqrt(x)
    v = x // u
    return (sum(map(sum3, (x // d for d in range(1, u+1)))) +
            sum(x // i * i**3 for i in range(1, v)) -
            u * sum3(v-1))
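As a sanity check, all the variants should agree on small inputs:

for x in range(1, 2000):
    assert better(x) == better2(x) == better3(x) == better4(x) == better5(x)
print("all variants agree")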
def sum_divisors(n):
    total = 0
    for i in range(1, n):
        if n % i == 0:
            total += i
    # Return the sum of all divisors of n, not including n
    return total
print(sum_divisors(0))
# 0
print(sum_divisors(3))    # Should be 1 (the only proper divisor of 3)
# 1
print(sum_divisors(36))   # Should be sum of 1+2+3+4+6+9+12+18
# 55
print(sum_divisors(102))  # Should be sum of 1+2+3+6+17+34+51
# 114
I have a simple math algorithm. All it does is take an input and find i, j such that i^2 + j^2 = input, with the restriction that j >= i (so that it doesn't print a pair's counterpart; e.g., 2^2 + 3^2 == 3^2 + 2^2, and I only need the latter since j >= i).
For my code, I did the following: I have two for loops, the first loop for i and the second for j. It takes each i and j pair and tests whether i^2 + j^2 == input and j >= i. If yes, it prints the pair and updates the count.
The problem is that with large inputs it takes a very long time, since it loops from 1 to 2000, and then from 1 to 2000 again, for every combination.
def some_mathfn(n):
    count = 0
    for i in range(1, n+1):
        for j in range(1, n+1):
            if i**2 + j**2 == n and j >= i:
                print(i, '^2 + ', j, '^2')
                count += 1
    return count

some_mathfn(2001)
You've got an O(n^2) algorithm for no obvious reason. It's easy to make this O(n^(1/2)):

1. Loop from 1 to the square root of n/2 (for variable i). Only up to sqrt(n/2), because when i is greater than sqrt(n/2), i*i + j*j will be greater than n for any j greater than or equal to i.
2. Subtract the square of i from n.
3. Take the square root of the result, and find the nearest integer; call that j.
4. Check whether the condition you're interested in holds.

The last two steps are effectively just checking that the square root of n - i*i is actually an integer. In some cases (for very large values of n), finding the nearest integer and then checking the condition is the more reliable approach, because it avoids floating-point limitations where the nearest representable double to the theoretical result could be an integer even though the actual result is not. That would only happen for really large values of n, but it's worth being aware of. A sketch of this approach follows.
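A sketch of that approach, using math.isqrt (Python 3.8+) to stay in exact integer arithmetic:

from math import isqrt

def some_mathfn(n):
    count = 0
    i = 1
    while 2 * i * i <= n:      # i <= sqrt(n/2) guarantees j >= i
        rest = n - i * i
        j = isqrt(rest)        # floor of the exact square root
        if j * j == rest:      # rest is a perfect square: i^2 + j^2 == n
            print(i, '^2 +', j, '^2')
            count += 1
        i += 1
    return count

some_mathfn(2001)  # 2001 has no such representation, so this returns 0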
I need an explanation for the program suggested in the edit in the first answer over here. It is a program to find the totients of a range of numbers. Can somebody provide a simple explanation? (Ignore the summation part for now, I need to find out how the init method finds the totients.) I know there is an explanation in the answer, but that is an explanation for different programs, I need an explanation for this particular one.
class Totient:
    def __init__(self, n):
        self.totients = [1 for i in range(n)]
        for i in range(2, n):
            if self.totients[i] == 1:
                for j in range(i, n, i):
                    self.totients[j] *= i - 1
                    k = j / i
                    while k % i == 0:
                        self.totients[j] *= i
                        k /= i

    def __call__(self, i):
        return self.totients[i]

if __name__ == '__main__':
    from itertools import imap
    totient = Totient(10000)
    print sum(imap(totient, range(10000)))
It's a variant of the Sieve of Eratosthenes for finding prime numbers.
If you want to know the totient of a single number n, the best way to find it is to factor n and take the product of 1 less than each factor; for instance, 30 = 2 * 3 * 5, and subtracting 1 from each factor, then multiplying, gives the totient 1 * 2 * 4 = 8. But if you want to find the totients of all the numbers less than a given n, a better approach than factoring each of them is sieving. The idea is simple: set up an array X from 0 to n, store i in each X_i, then run through the array starting from 0 and, whenever X_i = i, loop over the multiples of i, multiplying each by i - 1.
Further discussion and code at my blog.
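For reference, here is one standard way to implement the sieving idea in Python; this is a sketch rather than the code from the question, and it folds the prime-power handling into an exact integer update:

def totient_sieve(n):
    # phi[i] starts at i; for each prime p, multiply every multiple of p
    # by (1 - 1/p), done exactly in integers as phi[j] -= phi[j] // p
    phi = list(range(n + 1))
    for p in range(2, n + 1):
        if phi[p] == p:              # p untouched so far, hence prime
            for j in range(p, n + 1, p):
                phi[j] -= phi[j] // p
    return phi

print(totient_sieve(10)[1:])  # [1, 1, 2, 2, 4, 2, 6, 4, 6, 4]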
I'm not completely sure what the code is doing -- but frankly it looks pretty bad. It is clearly trying to use the fact that Euler's totient function is multiplicative, meaning that if a, b are relatively prime then t(a*b) = t(a)*t(b), together with the fact that if p is a prime then t(p) = p-1. But it seems to be using crude trial division to determine such things. If you really want to calculate the totient of all numbers in a given range then you should use an algorithm that sieves the numbers as you go along.
Here is a version which sieves as it goes and exploits the multiplicative nature to the hilt. At each pass through the main loop it starts with a prime, p which hasn't yet been processed. It determines all powers of p <= n and then uses a direct formula for these powers (see https://en.wikipedia.org/wiki/Euler%27s_totient_function ). Once these totients have been added, it forms all possible products <= n of these powers and the numbers for which the totients have been previously computed. This gives a whole slew of numbers to add to the list of previously determined numbers. At most sqrt(n) passes need to be made through the main loop. It runs almost instantly for n = 10000. It returns a list where the ith value is the totient of i (with t(0) = 0 for convenience):
def allTotients(n):
    totients = [None]*(n+1)   # totients[i] will contain t(i)
    totients[0] = 0
    totients[1] = 1
    knownTotients = []        # known in range 2 to n
    p = 2
    while len(knownTotients) < n - 1:
        powers = [p]
        k = 2
        while p ** k <= n:
            powers.append(p ** k)
            k += 1
        totients[p] = p - 1
        for i in range(1, len(powers)):
            totients[powers[i]] = powers[i] - powers[i-1]
        # at this stage powers represent newly discovered totients
        # combine with previously discovered totients to get still more
        newTotients = powers[:]
        for m in knownTotients:
            for pk in powers:
                if m * pk > n: break
                totients[m*pk] = totients[m] * totients[pk]
                newTotients.append(m*pk)
        knownTotients.extend(newTotients)
        # if there are any unknown totients -- the smallest such will be prime
        if len(knownTotients) < n - 1:
            p = totients.index(None)
    return totients
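A quick sanity check against small known values:

print(allTotients(10))
# [0, 1, 1, 2, 2, 4, 2, 6, 4, 6, 4]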
For completeness sake, here is a Python implementation of the algorithm to compute the totient of a single number which user448810 described in their answer:
from math import sqrt

# crude factoring algorithm:
small_primes = [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,
                53,59,61,67,71,73,79,83,89,97]

def factor(n):
    # returns a list of prime factors
    factors = []
    num = n
    # first pull out small prime factors
    for p in small_primes:
        while num % p == 0:
            factors.append(p)
            num = num // p
        if num == 1: return factors
    # now do trial division, starting at 101
    k = 101
    while k <= sqrt(num):
        while num % k == 0:
            factors.append(k)
            num = num // k
        k += 2
    if num == 1:
        return factors
    else:
        factors.append(num)
        return factors
def totient(n):
    factors = factor(n)
    unique_factors = set()
    t = 1
    for p in factors:
        if p in unique_factors:
            t *= p
        else:
            unique_factors.add(p)
            t *= (p - 1)
    return t
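Quick check:

print(totient(30))     # 8: 30 = 2*3*5, so (2-1)*(3-1)*(5-1)
print(totient(10**6))  # 400000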
This is the code I created to find the largest power-of-2 factor of a number. I do not think it is correct, because I keep getting 2 as the answer. I need some help figuring this out; I am completely new to programming.
MY CODE:
def largestPowerOfTwoThatIsAFactorOf(num):
    factor = 2
    while not(num > 0):
        factor = factor + 1
    return factor

print(largestPowerOfTwoThatIsAFactorOf(4))
print(largestPowerOfTwoThatIsAFactorOf(15))
print(largestPowerOfTwoThatIsAFactorOf(120))
# For any odd integer, largest power of 2 that's a factor is 1.
This is an interesting and useful function for dealing with FFTs in signal processing and analysis, since an FFT operates on a square matrix with power-of-two dimensions. Understanding that a power of two is the result of two raised to some power, and that there are infinitely many powers of two greater than n, a better name for the function would be "minimum power of two greater than n"; we just use it to size a collection of signal data before submitting it to an FFT filter. There follow two options, named minpowof2 and maxpowof2:
def minpowof2(n):
    '''Minimum Power of Two Greater Than n'''
    f = 1
    if n < 2: return 'try n greater or equal to 2'
    while n > 2:
        n /= 2
        f += 1
    return 2**f

def maxpowof2(n):
    '''Maximum Power of Two Lower Than n'''
    return int(minpowof2(n) / 2)
def largestPowerOfTwoThatIsAFactorOf(num):
    if num % 2 != 0: return 1
    factor = 0
    while num % 2 == 0:
        num /= 2
        factor += 1
    return 2 ** factor
    ## or return factor, as per your requirement
You need to update num inside the loop. You can also optimize the code a little by checking whether the input is odd in the first statement.
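As an aside, for positive integers there is also a constant-time bit trick for the same job:

def largest_pow2_factor(num):
    # num & -num isolates the lowest set bit, which for a positive
    # integer is exactly the largest power of two dividing it
    return num & -num

print(largest_pow2_factor(120))  # 8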
I'm relatively new to Python and I'm confused about the performance of two relatively simple blocks of code. The first function generates the prime factorization of a number n given a list of primes. The second generates a list of all factors of n. I would have thought prime_factor would be faster than factors (for the same n), but this is not the case. I'm not looking for better algorithms; rather, I would like to understand why prime_factor is so much slower than factors.
def prime_factor(n, primes):
    prime_factors = []
    i = 0
    while n != 1:
        if n % primes[i] == 0:
            factor = primes[i]
            prime_factors.append(factor)
            n = n // factor
        else:
            i += 1
    return prime_factors
import math

def factors(n):
    if n == 0: return []
    factors = {1, n}
    for i in range(2, math.floor(n ** (1/2)) + 1):
        if n % i == 0:
            factors.add(i)
            factors.add(n // i)
    return list(factors)
Using the timeit module,
{ i:factors(i) for i in range(1, 10000) } takes 2.5 seconds
{ i:prime_factor(i, primes) for i in range(1, 10000) } takes 17 seconds
This is surprising to me. factors checks every number from 1 to sqrt(n), while prime_factor only checks primes. I would appreciate any help in understanding the performance characteristics of these two functions.
Thanks
Edit: (response to roliu)
Here is my code to generate a list of primes from 2 to up_to:
def primes_up_to(up_to):
    marked = [0] * up_to
    value = 3
    primes = [2]
    while value < up_to:
        if marked[value] == 0:
            primes.append(value)
            i = value
            while i < up_to:
                marked[i] = 1
                i += value
        value += 2
    return primes
Without seeing what you used for primes, we have to guess (we can't run your code).
But a big part of this is simply mathematics: there are (very roughly speaking) about n/log(n) primes less than n, and that's a lot bigger than sqrt(n). So when you pass a prime, prime_factor(n) does a lot more work: it goes through O(n/log(n)) operations before finding the first prime factor (n itself!), while factors(n) gives up after O(sqrt(n)) operations.
This can be very significant. For example, sqrt(10000) is just 100, but there are 1229 primes less than 10000. So prime_factor(n) can need to do over 10 times as much work to deal with the large primes in your range.
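If you want to measure this yourself, a rough timing harness (it assumes factors, prime_factor, and primes_up_to are defined in the same module):

import timeit

setup = ("from __main__ import factors, prime_factor, primes_up_to; "
         "primes = primes_up_to(10000)")
print(timeit.timeit("{i: factors(i) for i in range(1, 10000)}",
                    setup=setup, number=1))
print(timeit.timeit("{i: prime_factor(i, primes) for i in range(1, 10000)}",
                    setup=setup, number=1))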