I was bored at work and was playing with some math and python coding, when I noticed the following:
Recursively (or if using a for loop) you simply add integers together to get a given Fibonacci number. However there is also a direct equation for calculating Fibonacci numbers, and for large n this equation will give answers that are, frankly, quite wrong with respect to the recursively calculated Fibonacci number.
I imagine this is due to rounding and floating point arithmetic (sqrt(5) is irrational after all), and if so, can anyone point me in a direction on how I could modify the fib_calc_direct function to return a more accurate result?
Thanks!
def fib_calc_recur(n, ii=0, jj=1):
    # n is the index of the nth Fibonacci number, F_n, where F_0 = 0, F_1 = 1, ...
    if n == 0:  # use recursion
        return ii
    if n == 1:
        return jj
    else:
        return fib_calc_recur(n - 1, jj, ii + jj)

import numpy as np

def fib_calc_direct(n):
    a = (1 + np.sqrt(5)) / 2
    b = (1 - np.sqrt(5)) / 2
    f = (1 / np.sqrt(5)) * (a**n - b**n)
    return f
You could make use of Decimal numbers, and set their precision depending on the magnitude of n.
Not your question, but I'd use an iterative version of the addition method. Here is a script that makes both calculations (naive addition, direct with Decimal) for values of n up to 4000:
def fib_calc_iter(n):
    a, b = 0, 1
    if n < 2:
        return n
    for _ in range(1, n):
        a, b = b, a + b
    return b
from decimal import Decimal, getcontext

def fib_calc_decimal(n):
    getcontext().prec = n // 4 + 3  # Choose a precision good enough for this n
    sqrt5 = Decimal(5).sqrt()
    da = (1 + sqrt5) / 2
    db = (1 - sqrt5) / 2
    f = (da**n - db**n) / sqrt5
    return int(f + Decimal(0.5))  # Round to nearest int
# Test it...
for n in range(1, 4000):
    x = fib_calc_iter(n)
    y = fib_calc_decimal(n)
    if x != y:
        print(f"Difference found for n={n}.\nNaive method={x}.\nDecimal method={y}")
        break
else:
    print("No differences found")
I'm trying to evaluate a Taylor polynomial for the natural logarithm, ln(x), centred at a=1 in Python. I'm using the series given on Wikipedia; however, when I try a simple calculation like ln(2.7), instead of giving me something close to 1 it gives me a gigantic number. Is there something obvious that I'm doing wrong?
def log(x):
    n = 1000
    s = 0
    for i in range(1, n):
        s += ((-1)**(i+1)) * ((x-1)**i) / i
    return s
EDIT: If anyone stumbles across this an alternative way to evaluate the natural logarithm of some real number is to use numerical integration (e.g. Riemann sum, midpoint rule, trapezoid rule, Simpson's rule etc) to evaluate the integral that is often used to define the natural logarithm;
That series only converges for 0 < x <= 2 (i.e. |x - 1| <= 1); 2.7 is outside that range, so for larger x you will need a different series.
For example this one (found here):
def ln(x): return 2*sum(((x-1)/(x+1))**i/i for i in range(1,100,2))
output:
ln(2.7) # 0.9932517730102833
math.log(2.7) # 0.9932517730102834
Note that it takes a lot more than 100 terms to converge as x gets bigger (up to a point where it'll become impractical)
You can compensate for that by adding the logarithms of smaller factors of x:
def ln(x):
    if x > 2: return ln(x/2) + ln(2)  # ln(x) = ln(x/2 * 2) = ln(x/2) + ln(2)
    return 2*sum(((x-1)/(x+1))**i/i for i in range(1,1000,2))
which is something you can also do in your Taylor based function to support x>1:
def log(x):
    if x > 1: return log(x/2) - log(0.5)  # ln(2) = -ln(1/2)
    n = 1000
    s = 0
    for i in range(1, n):
        s += ((-1)**(i+1)) * ((x-1)**i) / i
    return s
These series also take more terms to converge when x gets closer to zero so you may want to work them in the other direction as well to keep the actual value to compute between 0.5 and 1:
def log(x):
    if x > 1: return log(x/2) - log(0.5)    # ln(x/2 * 2) = ln(x/2) + ln(2)
    if x < 0.5: return log(2*x) + log(0.5)  # ln(x*2 / 2) = ln(x*2) - ln(2)
    ...
If performance is an issue, you'll want to store ln(2) or log(0.5) somewhere and reuse it instead of computing it on every call
for example:
ln2 = None

def ln(x):
    if x <= 2:
        return 2*sum(((x-1)/(x+1))**i/i for i in range(1,10000,2))
    global ln2
    if ln2 is None: ln2 = ln(2)
    n2 = 0
    while x > 2: x, n2 = x/2, n2+1
    return ln2*n2 + ln(x)
The program is correct, but the Mercator series has the following caveat:
The series converges to the natural logarithm (shifted by 1) whenever −1 < x ≤ 1.
Since your call uses x = 2.7, the series argument x - 1 = 1.7 exceeds 1, so the series diverges and you shouldn't expect a result close to 1.
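To make that concrete, here is a minimal sketch (my own, not from this answer) of one workaround: for x > 1 use ln(x) = -ln(1/x), since 1/x then lies in (0, 1], where the series converges.

def log_mercator(x, n=1000):
    # For x > 1 the series argument x - 1 exceeds 1 and the sum diverges,
    # so fall back on the identity ln(x) = -ln(1/x).
    if x > 1:
        return -log_mercator(1 / x, n)
    s = 0.0
    for i in range(1, n):
        s += ((-1) ** (i + 1)) * ((x - 1) ** i) / i
    return s

print(log_mercator(2.7))   # roughly 0.9933, close to math.log(2.7)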
The Python function math.frexp(x) can be used to advantage here to modify the problem so that the Taylor series is working with a value close to one. math.frexp(x) is described as:
Return the mantissa and exponent of x as the pair (m, e). m is a float
and e is an integer such that x == m * 2**e exactly. If x is zero,
returns (0.0, 0), otherwise 0.5 <= abs(m) < 1. This is used to “pick
apart” the internal representation of a float in a portable way.
Using math.frexp(x) should not be regarded as "cheating" because it is presumably implemented just by accessing the bit fields in the underlying binary floating point representation. It isn't absolutely guaranteed that the representation of floats will be IEEE 754 binary64, but as far as I know every platform uses this. sys.float_info can be examined to find out the actual representation details.
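For example (a quick illustration, not part of the original answer):

import math, sys

print(math.frexp(2.7))   # (0.675, 2), since 2.7 == 0.675 * 2**2
print(sys.float_info.radix, sys.float_info.mant_dig)   # 2 53 on IEEE 754 binary64 platforms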
Much like the other answer does, you can use the standard logarithmic identities as follows: let m, e = math.frexp(x). Then log(x) = log(m * 2**e) = log(m) + e * log(2). log(2) can be precomputed to full precision ahead of time and is just a constant in the program. Here is some code illustrating this to compute two similar Taylor series approximations to log(x). The number of terms in each series was determined by trial and error rather than rigorous analysis.
taylor1 implements log(1 + x) = x - (1/2)*x**2 + (1/3)*x**3 - ...
taylor2 implements log(x) = 2 * [t + (1/3)*t**3 + (1/5)*t**5 + ...], where t = (x - 1) / (x + 1).
import math

_LOG_OF_2 = 0.69314718055994530941723212145817656807550013436025

def taylor1(x):
    m, e = math.frexp(x)
    log_of_m = 0
    num_terms = 36
    sign = 1
    m_minus1_power = m - 1
    for k in range(1, num_terms + 1):
        log_of_m += sign * m_minus1_power / k
        sign = -sign
        m_minus1_power *= m - 1
    return log_of_m + e * _LOG_OF_2

def taylor2(x):
    m, e = math.frexp(x)
    num_terms = 12
    half_log_of_m = 0
    t = (m - 1) / (m + 1)
    t_squared = t * t
    t_power = t
    denominator = 1
    for k in range(num_terms):
        half_log_of_m += t_power / denominator
        denominator += 2
        t_power *= t_squared
    return 2 * half_log_of_m + e * _LOG_OF_2
This seems to work well over most of the domain of log(x), but as x approaches 1 (and log(x) approaches 0) the transformation provided by x = m * 2**e actually produces a less accurate result. So a better algorithm would first check whether x is close to 1, say abs(x - 1) < 0.5, and if so just compute the Taylor series approximation directly on x.
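A hedged sketch of that refinement (my own; it reuses math and _LOG_OF_2 from the code above, and keeps the same threshold and term count already used there rather than tuning them):

def taylor2_near_one(x):
    # Skip the frexp split when x is already close to 1; otherwise rescale as before.
    if abs(x - 1) < 0.5:
        m, e = x, 0
    else:
        m, e = math.frexp(x)
    half_log_of_m = 0.0
    t = (m - 1) / (m + 1)
    t_squared = t * t
    t_power = t
    denominator = 1
    for _ in range(12):
        half_log_of_m += t_power / denominator
        denominator += 2
        t_power *= t_squared
    return 2 * half_log_of_m + e * _LOG_OF_2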
My answer is just using the Taylor series for ln(x). I really hope this helps. It is simple and straight to the point.
I am trying to define a function which will approximate pi in Python using one of Euler's methods. His formula is as follows:
pi/4 = (3/4) * (5/4) * (7/8) * (11/12) * (13/12) * ..., i.e. the product over the odd primes p of p / d, where d is the multiple of 4 nearest to p.
My code so far is this:
def pi_euler1(n):
    numerator = list(range(2, n))
    for i in numerator:
        j = 2
        while i * j <= numerator[-1]:
            if i * j in numerator:
                numerator.remove(i * j)
            j += 1
    for k in numerator:
        if (k + 1) % 4 == 0:
            denominator = k + 1
        else:
            denominator = k - 1
        # Because all primes (other than 2) are odd, both neighbours k-1 and k+1 are even,
        # and exactly one of those two numbers is divisible by 4
        term = numerator / denominator
I know this is wrong, and also incomplete. I'm just not quite sure what the TypeError that I mentioned earlier actually means. I'm just quite stuck with it, I want to create a list of the terms and then find their products. Am I on the right lines?
Update:
I have worked my way around this, fixing the clearly obvious errors that were present, thanks to msconi and Johanc; the code is now as follows:
import math

def pi_euler1(n):
    numerator = list(range(2, 13 + math.ceil(n*(math.log(n) + math.log(math.log(n))))))
    denominator = []
    for i in numerator:
        j = 2
        while i * j <= numerator[-1]:
            if (i * j) in numerator:
                numerator.remove(i * j)
            j += 1
    numerator.remove(2)
    for k in numerator:
        if (k + 1) % 4 == 0:
            denominator.append(k+1)
        else:
            denominator.append(k-1)
    a = 1
    for i in range(n):
        a *= numerator[i] / denominator[i]
    return 4*a
This seems to work. When I tried to plot a graph of the errors from pi on a semilogy scale, I was getting a domain error, and I needed to change the upper bound of the range to n+1 because log(0) is undefined. Thank you guys.
Here is the code with some small modifications to get it working:
import math

def pi_euler1(n):
    lim = n * n + 4
    numerator = list(range(3, lim, 2))
    for i in numerator:
        j = 3
        while i * j <= numerator[-1]:
            if i * j in numerator:
                numerator.remove(i * j)
            j += 2
    euler_product = 1
    for k in numerator[:n]:
        if (k + 1) % 4 == 0:
            denominator = k + 1
        else:
            denominator = k - 1
        factor = k / denominator
        euler_product *= factor
    return euler_product * 4

print(pi_euler1(3))
print(pi_euler1(10000))
print(math.pi)
Output:
3.28125
3.148427801913721
3.141592653589793
Remarks:
You only want the odd primes, so you can start with a list of odd numbers.
j can start with 3 and increment in steps of 2. In fact, j can start at i because all the multiples of i smaller than i*i are already removed earlier.
In general it is very bad practice to remove elements from the list over which you are iterating. See e.g. this post. Internally, Python uses an index into the list over which it iterates. Coincidentally, this is not a problem in this specific case, because only numbers larger than the current one are removed.
Also, removing elements from a very long list is very slow, as each time the complete list needs to be moved to fill the gap. Therefore, it is better to work with two separate lists.
You didn't calculate the resulting product, nor did you return it.
As you notice, this formula converges very slowly.
As mentioned in the comments, the previous version interpreted n as the limit for the highest prime, while in fact n should be the number of primes. I adapted the code to rectify that in the above version with a crude limit; the version below tries a tighter approximation for the limit.
Here is a reworked version, without removing from the list you're iterating over. Instead of removing elements, it just marks them. This is much faster, so a larger n can be used in a reasonable time:
import math

def pi_euler_v3(n):
    if n < 3:
        lim = 6
    else:
        lim = n*n
        while lim / math.log(lim) / 2 > n:
            lim //= 2
    print(n, lim)
    numerator = list(range(3, lim, 2))
    odd_primes = []
    for i in numerator:
        if i is not None:
            odd_primes.append(i)
            if len(odd_primes) >= n:
                break
            j = i
            while i * j < lim:
                numerator[(i*j - 3) // 2] = None
                j += 2
    if len(odd_primes) != n:
        print(f"Wrong limit calculation, only {len(odd_primes)} primes instead of {n}")
    euler_product = 1
    for k in odd_primes:
        denominator = k + 1 if k % 4 == 3 else k - 1
        euler_product *= k / denominator
    return euler_product * 4
print(pi_euler_v3(100000))
print(math.pi)
Output:
3.141752253548891
3.141592653589793
In term = numerator / denominator you are dividing a list by a number, which doesn't make sense. Divide k by the denominator in the loop, so that you use the numerator elements for each of the equation's factors one by one, and multiply them into a running product, term *= k / denominator, which you initialize at the beginning as term = 1.
Another issue is the first loop, which won't give you the first n prime numbers. For example, for n=3, list(range(2 , n)) = [2]. Therefore, the only prime you will get is 2.
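A minimal sketch of those two fixes together (my own illustration, not code from the question):

def pi_euler_fixed(n):
    # Collect the first n odd primes by trial division against the primes found so far.
    primes = []
    candidate = 3
    while len(primes) < n:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 2
    term = 1.0                                   # running product, initialized to 1
    for k in primes:
        denominator = k + 1 if (k + 1) % 4 == 0 else k - 1
        term *= k / denominator                  # one factor per prime, not a list division
    return 4 * term

print(pi_euler_fixed(3))   # 3.28125, matching the output of the corrected answer above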
I am trying to find an efficient way to compute Euler's totient function.
What is wrong with this code? It doesn't seem to be working.
def isPrime(a):
    return not (a < 2 or any(a % i == 0 for i in range(2, int(a ** 0.5) + 1)))

def phi(n):
    y = 1
    for i in range(2, n+1):
        if isPrime(i) is True and n % i == 0 is True:
            y = y * (1 - 1/i)
        else:
            continue
    return int(y)
Here's a much faster, working way, based on this description on Wikipedia:
Thus if n is a positive integer, then φ(n) is the number of integers k in the range 1 ≤ k ≤ n for which gcd(n, k) = 1.
I'm not saying this is the fastest or cleanest, but it works.
from math import gcd

def phi(n):
    amount = 0
    for k in range(1, n + 1):
        if gcd(n, k) == 1:
            amount += 1
    return amount
You have three different problems...
y needs to be equal to n as initial value, not 1
As some have mentioned in the comments, don't use integer division
n % i == 0 is True isn't doing what you think, because Python chains the comparisons: it is evaluated as (n % i == 0) and (0 is True). Even when n % i equals 0, so that 0 == 0 is True, 0 is True is False! Use parentheses, or just get rid of the comparison to True since it isn't necessary anyway.
Fixing those problems,
def phi(n):
    y = n
    for i in range(2, n+1):
        if isPrime(i) and n % i == 0:
            y *= 1 - 1.0/i
    return int(y)
Calculating gcd for every pair in the range is not efficient and does not scale. You don't need to iterate through the whole range: if n is not a prime, you can check for prime factors up to its square root; refer to https://stackoverflow.com/a/5811176/3393095.
We must then update phi for every prime factor: phi = phi*(1 - 1/prime).
def totatives(n):
    phi = int(n > 1 and n)
    for p in range(2, int(n ** .5) + 1):
        if not n % p:
            phi -= phi // p
            while not n % p:
                n //= p
    # if n is > 1 it means it is prime
    if n > 1: phi -= phi // n
    return phi
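For instance, a quick check against the worked value of phi(36) used further down this page (illustrative calls, not from the original answer):

print(totatives(36))   # 12
print(totatives(10))   # 4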
I'm working on a cryptographic library in Python and this is what I'm using. gcd() is Euclid's method for calculating the greatest common divisor, and phi() is the totient function.
def gcd(a, b):
    while b:
        a, b = b, a % b
    return a

def phi(a):
    b = a - 1
    c = 0
    while b:
        if not gcd(a, b) - 1:
            c += 1
        b -= 1
    return c
Most implementations mentioned by other users rely on calling a gcd() or isPrime() function. In the case you are going to use the phi() function many times, it pays off to calculate these values beforehand. A way of doing this is by using a so-called sieve algorithm.
https://stackoverflow.com/a/18997575/7217653 This answer on Stack Overflow provides us with a fast way of finding all primes below a given number.
OK, now we can replace isPrime() with a search in our array.
Now the actual phi function:
Wikipedia gives us a clear example: https://en.wikipedia.org/wiki/Euler%27s_totient_function#Example
phi(36) = phi(2^2 * 3^2) = 36 * (1 - 1/2) * (1 - 1/3) = 36 * 1/2 * 2/3 = 12
In words, this says that the distinct prime factors of 36 are 2 and 3; half of the thirty-six integers from 1 to 36 are divisible by 2, leaving eighteen; a third of those are divisible by 3, leaving twelve numbers that are coprime to 36. And indeed there are twelve positive integers that are coprime with 36 and lower than 36: 1, 5, 7, 11, 13, 17, 19, 23, 25, 29, 31, and 35.
TL;DR
In other words: we have to find all the prime factors of our number and then multiply n by (1 - 1/prime_factor) for each of them: foreach prime_factor: n *= 1 - 1/prime_factor.
import math

MAX = 10**5

# CREDIT TO https://stackoverflow.com/a/18997575/7217653
def sieve_for_primes_to(n):
    size = n // 2
    sieve = [1] * size
    limit = int(n**0.5)
    for i in range(1, limit):
        if sieve[i]:
            val = 2*i + 1
            tmp = ((size - 1) - i) // val
            sieve[i+val::val] = [0] * tmp
    return [2] + [i*2+1 for i, v in enumerate(sieve) if v and i > 0]

PRIMES = sieve_for_primes_to(MAX)
print("Primes generated")

def phi(n):
    original_n = n
    prime_factors = []
    prime_index = 0
    while n > 1:  # As long as there are more factors to be found
        p = PRIMES[prime_index]
        if n % p == 0:  # is this prime a factor?
            prime_factors.append(p)
            while math.ceil(n / p) == math.floor(n / p):  # as long as dividing by this factor gives back an integer, divide it out
                n = n // p
        prime_index += 1
    for v in prime_factors:  # Now we have the prime factors, we do the same calculation as Wikipedia
        original_n *= 1 - (1/v)
    return int(original_n)

print(phi(36))  # = phi(2**2 * 3**2) = 36 * (1- 1/2) * (1- 1/3) = 36 * 1/2 * 2/3 = 12
It looks like you're trying to use Euler's product formula, but you're not calculating the number of primes which divide a; you're calculating the number of elements relatively prime to a.
In addition, since 1 and i are both integers, so is the division (under Python 2); in that case you always get 0.
With regards to efficiency, I haven't noticed anyone mention that gcd(k,n) = gcd(n-k,n). Using this fact can save roughly half the work needed for the methods involving the use of the gcd. Just start the count with 2 (because 1/n and (n-1)/n will always be irreducible) and add 2 each time the gcd is one.
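A small sketch of that halving trick (my own illustration; n < 3 is handled separately since the pairing argument needs at least the pair (1, n-1)):

from math import gcd

def phi_paired(n):
    # phi(1) = phi(2) = 1; the pairing below assumes n >= 3.
    if n < 3:
        return 1
    # gcd(k, n) == gcd(n - k, n), so totatives come in pairs (k, n - k).
    # The pair (1, n - 1) always counts, so start at 2 and only scan k up to n/2.
    count = 2
    for k in range(2, (n + 1) // 2):
        if gcd(n, k) == 1:
            count += 2
    return count

print(phi_paired(36))   # 12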
Here is a shorter implementation of orlp's answer.
from math import gcd
def phi(n): return sum([gcd(n, k)==1 for k in range(1, n+1)])
As others have already mentioned it leaves room for performance optimization.
Actually, to calculate phi(n) for any number n we use the formula phi(n) = n * product of (1 - 1/p), where p runs over the prime factors of n.
So, you have a few mistakes in your code:
1. y should be initialized to n, not 1.
2. In 1/i, both 1 and i are integers, so their quotient is an integer too (under Python 2); this leads to wrong results.
Here is the code with the required corrections.
def phi(n):
    y = n
    for i in range(2, n+1):
        if isPrime(i) and n % i == 0:
            y -= y/i
        else:
            continue
    return int(y)
How does one convert a base-10 floating point number in Python to a base-N floating point number?
Specifically in my case, I would like to convert numbers to base 3 (obtain the representation of floating point numbers in base 3), for calculations with the Cantor set.
After a bit of fiddling, here's what I came up with. I present it to you humbly, keeping in mind Ignacio's warning. Please let me know if you find any flaws. Among other things, I have no reason to believe that the precision argument provides anything more than a vague assurance that the first precision digits are pretty close to correct.
import math

def base3int(x):
    x = int(x)
    exponents = range(int(math.log(x, 3)), -1, -1)
    for e in exponents:
        d = int(x // (3 ** e))
        x -= d * (3 ** e)
        yield d

def base3fraction(x, precision=1000):
    x = x - int(x)
    exponents = range(-1, (-precision - 1) * 2, -1)
    for e in exponents:
        d = int(x // (3 ** e))
        x -= d * (3 ** e)
        yield d
        if x == 0: break
These are iterators returning ints. Let me know if you need string conversion; but I imagine you can handle that.
EDIT: Actually, looking at this some more, it seems like an if x == 0: break line after the yield in base3fraction gives you pretty much arbitrary precision. I went ahead and added that. Still, I'm leaving in the precision argument; it makes sense to be able to limit that quantity.
Also, if you want to convert back to decimal fractions, this is what I used to test the above.
sum(d * (3 ** (-i - 1)) for i, d in enumerate(base3fraction(x)))
Update
For some reason I've felt inspired by this problem. Here's a much more generalized solution. This returns two generators that generate sequences of integers representing the integral and fractional part of a given number in an arbitrary base. Note that this only returns two generators to distinguish between the parts of the number; the algorithm for generating digits is the same in both cases.
import itertools
import math

def convert_base(x, base=3, precision=None):
    length_of_int = int(math.log(x, base))
    iexps = range(length_of_int, -1, -1)
    if precision is None:
        fexps = itertools.count(-1, -1)
    else:
        fexps = range(-1, -int(precision + 1), -1)

    def cbgen(x, base, exponents):
        for e in exponents:
            d = int(x // (base ** e))
            x -= d * (base ** e)
            yield d
            if x == 0 and e < 0: break

    return cbgen(int(x), base, iexps), cbgen(x - int(x), base, fexps)
Although 8 years have passed, I think it is worthwhile to mention a more compact solution.
def baseConversion(x=1, base=3, decimals=2):
    import math
    n_digits = math.floor(-math.log(x, base))  # minus the number of digits in front of the radix point
    x_newBase = 0  # initialize
    for i in range(n_digits, decimals + 1):
        x_newBase = x_newBase + int(x * base**i) % base * 10**(-i)
    return x_newBase
For example, calling the function to convert the number 5 + 1/9 + 1/27:
baseConversion(x=5+1/9+1/27, base=3, decimals=2)   # 12.01
baseConversion(x=5+1/9+1/27, base=3, decimals=3)   # 12.011
You may try this solution to convert a float string to a given base.
def eval_strint(s, base=2):
    assert type(s) is str
    assert 2 <= base <= 36
    return int(s, base)

import string

def is_valid_strdigit(c, base=2):
    # Not part of the original post: a minimal stand-in for the helper the
    # exercise assumes, accepting single digits/letters valid in this base.
    return c.lower() in (string.digits + string.ascii_lowercase)[:base]

def is_valid_strfrac(s, base=2):
    return all([is_valid_strdigit(c, base) for c in s if c != '.']) \
        and (len([c for c in s if c == '.']) <= 1)

def eval_strfrac(s, base=2):
    assert is_valid_strfrac(s, base), "'{}' contains invalid digits for a base-{} number.".format(s, base)
    stg = s.split(".")
    float_point = 0.0
    if len(stg) > 1:
        float_point = eval_strint(stg[1], base) * (base ** (-len(stg[1])))
    stg_float = eval_strint(stg[0], base) + float_point
    return stg_float
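Quick illustrative checks (values chosen here, not from the original post):

print(eval_strfrac("101.11", base=2))   # 5.75
print(eval_strfrac("12.011", base=3))   # 5.1481481..., i.e. 5 + 1/9 + 1/27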
I want to generate the digits of the square root of two to 3 million digits.
I am aware of Newton-Raphson but I don't have much clue how to implement it in C or C++ due to lack of biginteger support. Can somebody point me in the right direction?
Also, if anybody knows how to do it in python (I'm a beginner), I would also appreciate it.
You could try using the mapping:
a/b -> (a+2b)/(a+b), starting with a = 1, b = 1. This converges to sqrt(2) (in fact it gives the continued fraction convergents of it).
Now the key point: this can be represented as a matrix multiplication (similar to Fibonacci).
If a_n and b_n are the nth numbers in the sequence, then

[1 2]   [a_n]   [a_(n+1)]
[1 1] * [b_n] = [b_(n+1)]

which now gives us

[1 2]^n   [a_1]   [a_(n+1)]
[1 1]   * [b_1] = [b_(n+1)]
Thus if the 2x2 matrix is A, we need to compute A^n, which can be done by repeated squaring and only uses integer arithmetic (so you don't have to worry about precision issues).
Also note that the a/b you get will always be in reduced form (as gcd(a,b) = gcd(a+2b, a+b)), so if you are thinking of using a fraction class to represent the intermediate results, don't!
Since the nth denominator grows like (1+sqrt(2))^n, to get 3 million digits you would likely need to compute up to about the 3671656th term.
Note that even though you are looking for the ~3.6 millionth term, repeated squaring will allow you to compute the nth term in O(log n) multiplications and additions.
Also, this can easily be made parallel, unlike the iterative ones like Newton-Raphson etc.
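A rough sketch of that idea in Python (the names and structure here are mine, not from the answer; it uses plain integer arithmetic throughout, with repeated squaring for the matrix power):

def mat_mul(m, k):
    (a, b), (c, d) = m
    (e, f), (g, h) = k
    return ((a*e + b*g, a*f + b*h), (c*e + d*g, c*f + d*h))

def mat_pow(m, n):
    result = ((1, 0), (0, 1))            # 2x2 identity
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result

def sqrt2_convergent(n):
    # [a_(n+1), b_(n+1)]^T = [[1, 2], [1, 1]]^n . [1, 1]^T
    (p, q), (r, s) = mat_pow(((1, 2), (1, 1)), n)
    return p + q, r + s                  # a/b approximates sqrt(2)

a, b = sqrt2_convergent(20)
print(a, b)   # exact integers; a/b agrees with sqrt(2) to many digits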
EDIT: I like this version better than the previous one. It's a general solution that accepts both integers and decimal fractions; with n = 2 and precision = 100000, it takes about two minutes. Thanks to Paul McGuire for his suggestions; other suggestions welcome!
def sqrt_list(n, precision):
    ndigits = []  # break n into list of digits
    n_int = int(n)
    n_fraction = n - n_int

    while n_int:  # generate list of digits of integral part
        ndigits.append(n_int % 10)
        n_int //= 10
    if len(ndigits) % 2: ndigits.append(0)  # ndigits will be processed in groups of 2
    decimal_point_index = len(ndigits) // 2  # remember decimal point position

    while n_fraction:  # insert digits from fractional part
        n_fraction *= 10
        ndigits.insert(0, int(n_fraction))
        n_fraction -= int(n_fraction)
    if len(ndigits) % 2: ndigits.insert(0, 0)  # ndigits will be processed in groups of 2

    rootlist = []
    root = carry = 0  # the algorithm
    while root == 0 or (len(rootlist) < precision and (ndigits or carry != 0)):
        carry = carry * 100
        if ndigits: carry += ndigits.pop() * 10 + ndigits.pop()
        x = 9
        while (20 * root + x) * x > carry:
            x -= 1
        carry -= (20 * root + x) * x
        root = root * 10 + x
        rootlist.append(x)
    return rootlist, decimal_point_index
As for arbitrary big numbers you could have a look at The GNU Multiple Precision Arithmetic Library (for C/C++).
For work? Use a library!
For fun? Good for you :)
Write a program to imitate what you would do with pencil and paper. Start with 1 digit, then 2 digits, then 3, ..., ...
Don't worry about Newton or anybody else. Just do it your way.
Here is a short version for calculating the square root of an integer a to digits digits of precision. It works by finding the integer square root of a after multiplying it by 10 raised to 2*digits.
def sqroot(a, digits):
    a = a * (10**(2*digits))
    x_prev = 0
    x_next = 1 * (10**digits)
    while x_prev != x_next:
        x_prev = x_next
        x_next = (x_prev + (a // x_prev)) >> 1
    return x_next
Just a few caveats.
You'll need to convert the result to a string and add the decimal point at the correct location (if you want the decimal point printed); see the small sketch after these caveats.
Converting a very large integer to a string isn't very fast.
Dividing very large integers isn't very fast (in Python) either.
Depending on the performance of your system, it may take an hour or longer to calculate the square root of 2 to 3 million decimal places.
I haven't proven the loop will always terminate. It may oscillate between two values differing in the last digit. Or it may not.
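A small sketch of the first caveat above (the formatting choice here is mine, not from the answer):

digits = 50
r = sqroot(2, digits)                    # floor(sqrt(2) * 10**digits) as a plain int
s = str(r)
print(s[:-digits] + "." + s[-digits:])   # 1.4142135623730950488...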
The nicest way is probably using the continued fraction expansion [1; 2, 2, ...] of the square root of two.
def root_two_cf_expansion():
    yield 1
    while True:
        yield 2

def z(a, b, c, d, contfrac):
    for x in contfrac:
        while a > 0 and b > 0 and c > 0 and d > 0:
            t = a // c
            t2 = b // d
            if not t == t2:
                break
            yield t
            a = (10 * (a - c*t))
            b = (10 * (b - d*t))
            # continue with same fraction, don't pull new x
        a, b = x*a + b, a
        c, d = x*c + d, c
    for digit in rdigits(a, c):
        yield digit

def rdigits(p, q):
    while p > 0:
        if p > q:
            d = p // q
            p = p - q * d
        else:
            d = (10 * p) // q
            p = 10 * p - q * d
        yield d

def decimal(contfrac):
    return z(1, 0, 0, 1, contfrac)
decimal(root_two_cf_expansion()) returns an iterator of all the decimal digits. t and t2 in the algorithm are the minimum and maximum values of the next digit. When they are equal, we output that digit.
Note that this does not handle certain exceptional cases such as negative numbers in the continued fraction.
(This code is an adaptation of Haskell code for handling continued fractions that has been floating around.)
Well, the following is the code that I wrote. It generated a million digits after the decimal point for the square root of 2 in about 60800 seconds for me, but my laptop was sleeping while the program was running, so it should be faster than that. You can try to generate 3 million digits, but it might take a couple of days.
def sqrt(number, digits_after_decimal=20):
    import time
    start = time.time()
    original_number = number
    number = str(number)
    list = []
    for a in range(len(number)):
        if number[a] == '.':
            decimal_point_location = a
            break
        if a == len(number) - 1:
            number += '.'
            decimal_point_location = a + 1
    if decimal_point_location/2 != round(decimal_point_location/2):
        number = '0' + number
        decimal_point_location += 1
    if len(number)/2 != round(len(number)/2):
        number += '0'
    number = number[:decimal_point_location] + number[decimal_point_location+1:]
    decimal_point_ans = int((decimal_point_location - 2)/2) + 1
    for a in range(0, len(number), 2):
        if number[a] != '0':
            list.append(eval(number[a:a+2]))
        else:
            try:
                list.append(eval(number[a+1]))
            except IndexError:
                pass
    p = 0
    c = list[0]
    x = 0
    ans = ''
    for a in range(len(list)):
        while c >= (20*p+x)*(x):
            x += 1
        y = (20*p+x-1)*(x-1)
        p = p*10 + x - 1
        ans += str(x-1)
        c -= y
        try:
            c = c*100 + list[a+1]
        except IndexError:
            c = c*100
    while c != 0:
        x = 0
        while c >= (20*p+x)*(x):
            x += 1
        y = (20*p+x-1)*(x-1)
        p = p*10 + x - 1
        ans += str(x-1)
        c -= y
        c = c*100
        if len(ans) - decimal_point_ans >= digits_after_decimal:
            break
    ans = ans[:decimal_point_ans] + '.' + ans[decimal_point_ans:]
    total = time.time() - start
    return ans, total
Python already supports big integers out of the box, and if that's the only thing holding you back in C/C++ you can always write a quick container class yourself.
The only problem you've mentioned is a lack of big integers. If you don't want to use a library for that, then are you looking for help writing such a class?
Here's a more efficient integer square root function (in Python 3.x) that should terminate in all cases. It starts with a number much closer to the square root, so it takes fewer steps. Note that int.bit_length requires Python 3.1+. Error checking left out for brevity.
def isqrt(n):
    x = (n >> n.bit_length() // 2) + 1
    result = (x + n // x) // 2
    while abs(result - x) > 1:
        x = result
        result = (x + n // x) // 2
    while result * result > n:
        result -= 1
    return result
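For the original question, the same scaling trick works with this function; for example (my own illustration; Python 3.8+ assumed for the math.isqrt cross-check):

import math

approx = isqrt(2 * 10**40)                 # floor(sqrt(2) * 10**20)
print(approx)                              # 141421356237309504880 -- the first digits of sqrt(2)
print(approx == math.isqrt(2 * 10**40))    # True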