Arbitrary precision of square roots - python

I was quite disappointed when decimal.Decimal(math.sqrt(2)) yielded
Decimal('1.4142135623730951454746218587388284504413604736328125')
and the digits after the 15th decimal place turned out wrong. (Despite happily giving you much more than 15 digits!)
How can I get the first m correct digits in the decimal expansion of sqrt(n) in Python?

Use the sqrt method on Decimal
>>> from decimal import *
>>> getcontext().prec = 100 # Change the precision
>>> Decimal(2).sqrt()
Decimal('1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641573')

You can try bigfloat. Example from the project page:
from bigfloat import *
sqrt(2, precision(100)) # compute sqrt(2) with 100 bits of precision

IEEE standard double-precision floating-point numbers have only about 16 significant decimal digits of precision (53 bits). Any software/hardware that uses IEEE 754 doubles cannot do better:
http://en.wikipedia.org/wiki/IEEE_754-2008
You'd need a special BigDecimal class implementation, with all math functions implemented to use it. Python has such a thing:
https://literateprograms.org/arbitrary-precision_elementary_mathematical_functions__python_.html
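To illustrate the idea (a minimal sketch, not the code behind that link): with Newton's method you can push a square root to whatever Decimal precision you ask for. The dec_sqrt name and the number of guard digits are just illustrative choices.
from decimal import Decimal, getcontext

def dec_sqrt(n, digits):
    """Approximate sqrt(n) to `digits` significant digits with Newton's method."""
    getcontext().prec = digits + 5            # a few guard digits for the iteration
    x = Decimal(n)
    guess = x / 2 if x > 1 else Decimal(1)
    for _ in range(200):                      # quadratic convergence; 200 steps is far more than enough
        new = (guess + x / guess) / 2         # Newton step for g*g - x = 0
        if new == guess:
            break
        guess = new
    getcontext().prec = digits                # note: this mutates the global context
    return +guess                             # unary plus re-rounds to the final precision

print(dec_sqrt(2, 50))                        # agrees with Decimal(2).sqrt() above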

How can I get the first m correct digits in the decimal expansion of sqrt(n) in Python?
One way is to calculate the integer square root of the number multiplied by the required power of 10. For example, to see the first 20 decimal places of sqrt(2), you can do:
>>> from gmpy2 import isqrt
>>> num = 2
>>> prec = 20
>>> isqrt(num * 10**(2*prec))
mpz(141421356237309504880)
The isqrt function is actually quite easy to implement yourself using the algorithm provided on the Wikipedia page.
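If gmpy2 is not installed, a minimal integer square root in the same spirit (Newton's method on plain Python integers, not gmpy2's implementation) might look like this:
def isqrt(n):
    """Largest integer x with x*x <= n."""
    if n < 0:
        raise ValueError("isqrt of a negative number")
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)   # initial guess, guaranteed >= sqrt(n)
    while True:
        y = (x + n // x) // 2              # Newton step on integers
        if y >= x:
            return x
        x = y

print(isqrt(2 * 10**40))   # 141421356237309504880, the same digits as the gmpy2 result above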

Related

How to get a very long decimal digit by dividing an irrational number with another number in Python?

import math
from decimal import *
getcontext().prec = 10000
phi = str(Decimal(1 + math.sqrt(5)) / Decimal(2))
print(phi)
print(len(phi))
#output >> phi - 1.6180339887498949025257388711906969547271728515625
#output >> len(phi) - 51
I wanted to get a number 1000 digits long, but in Python the limit seems to be 51. So how can I get a very long number of digits in Python?
The math module works with native binary floating-point arithmetic. So the crucial
1+math.sqrt(5)
part has nothing to do with the decimal precision you set. That part is done entirely in native binary floating-point.
To keep the output to a reasonable length, let's try for 80 significant decimal digits instead. So
getcontext().prec = 80
and then
phi = str((1 + Decimal(5).sqrt()) / 2)
instead. Clear? Now the sqrt method of a Decimal object is being used, and that will respect the decimal precision you set. You can force 1 and 2 to Decimal too, but there's no real need to: they'll be converted automatically to Decimal because their other operand is Decimal. Output from the above:
1.6180339887498948482045868343656381177203091798057628621354486227052604628189024
81
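For the asker's original goal of 1000 digits, the same approach scales directly; a minimal sketch:
from decimal import Decimal, getcontext

getcontext().prec = 1000                # 1000 significant digits
phi = (1 + Decimal(5).sqrt()) / 2
print(phi)                              # 1.6180339887498948482045868343656...
print(len(str(phi)))                    # 1001: 1000 digits plus the decimal point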

SymPy rounding behaviour

I was investigating different rounding methods using Python's built-in solution and some external libraries such as SymPy, and while doing so I stumbled upon some cases that I need help understanding.
Ex-1:
print(round(1.0065,3))
output:
1.006
In the first case, using the Python built-in rounding function, the output was 1.006 instead of 1.007. I can understand that this is not a mistake, as Python rounds to the nearest even value, which is known as banker's rounding.
That is why I started searching for another way to control the rounding behaviour. With a quick search I found the decimal.Decimal module, which can easily handle decimal values and round them using quantize(), as in this example:
from decimal import Decimal, getcontext, ROUND_HALF_UP
context = getcontext()
context.rounding = ROUND_HALF_UP
print(Decimal('1.0065').quantize(Decimal('.001')))
output: 1.007
This is a very good solution, but the only problem is that it is not easy to embed in long math expressions: I need to convert every number to a string, and then pass the precision in the form '0.001' instead of writing 3 directly as with the built-in round.
While searching for another solution I found that SymPy, which I already use a lot in my scripts, offers some very powerful functions that might help, but when I tried them the output was not as I expected.
Ex-1 using SymPy sympify():
print(sympify(1.0065).evalf(3))
output: 1.01
Ex-2 using SymPy N (normalize):
print(N(1.0065,3))
output: 1.01
At first the output seemed a little weird, but after investigating I realized that N and sympify are performing the rounding correctly, just to significant figures rather than to decimal places.
And here the questions come:
As I can use getcontext().rounding = ROUND_HALF_UP with Decimal objects to change the rounding behaviour, is there a way to change the N and sympify rounding behaviour to decimal places instead of significant figures?
Instead of re-implementing decimal rounding in SymPy, perhaps use decimal to do the rounding, but hide the calculation in a utility function:
import sympy as sym
import decimal
from decimal import Decimal as D
def dround(d, ndigits, rounding=decimal.ROUND_HALF_UP):
    result = D(str(d)).quantize(D('0.1')**ndigits, rounding=rounding)
    # result = sym.sympify(result)  # if you want a SymPy Float
    return result

for x in [0.0065, 1.0065, 10.0065, 100.0065]:
    print(dround(x, 3))
prints
0.007
1.007
10.007
100.007
The n of evalf gives the first n significant digits of x (measured from the left). If you use x.round(n) instead, it will round x to the nth digit counted from the decimal point; n can be positive (to the right of the decimal point) or negative (to the left of it).
>>> from sympy import S
>>> for x in '0.0065, 1.0065, 10.0065, 100.0065'.split(', '):
...     print(S(x).round(3))
0.006
1.006
10.007
100.007
>>> int(S(12345).round(-2))
12300
First of all, N and evalf are essentially the same thing; N(x, n) amounts to sympify(x).evalf(n). In your case, since x is a Python float, it's easier to use N because it sympifies the input.
To get three digits after the decimal point, use N(x, 3 + log(x, 10) + 1). The adjustment log(x, 10) + 1 is 0 when x is between 0.1 and 1; in that case the number of significant digits is the same as the number of digits after the decimal point. If x is larger, we get more significant digits.
Example:
for x in [0.0065, 1.0065, 10.0065, 100.0065]:
    print(N(x, 3 + log(x, 10) + 1))
prints
0.006
1.007
10.007
100.007
The transition from 6 to 7 is curious, but not entirely surprising. These numbers are not exactly representable in binary, so truncation to the nearest double-precision float may be a factor here. I've made a few additional observations on this effect on my blog.
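One way to see the exact double that 1.0065 becomes is to convert the float to Decimal, which is an exact conversion (a small aside, not part of the original answer):
from decimal import Decimal
print(Decimal(1.0065))   # 1.0064999999999999..., i.e. the stored double is slightly below 1.0065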

Python: Float infinite length (Precision float)

My code:
def calc_pi(acc):
    pos = False
    sum = 4.0
    for i in range(2, acc):
        if not pos:
            sum -= 4.0/(2*i-1)
            pos = True
        else:
            sum += 4.0/(2*i-1)
            pos = False
    return float(sum)

print(calc_pi(5000))
And of course, I'm trying to calculate pi with more than 10 digits after the decimal point, but Python seems to round the result off. Is there a simple way to prevent it from doing this? Ideally I'd like something like a million digits after the decimal point.
Thank you!
You can use the Decimal class provided by the standard library.
From the docs:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
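Applied to the asker's loop, a minimal sketch might look like the following. Note that the Leibniz series used there converges far too slowly to get anywhere near a million digits; this only shows how to swap the floats for Decimal:
from decimal import Decimal, getcontext

def calc_pi(acc):
    getcontext().prec = 50               # work with 50 significant digits
    total = Decimal(4)
    sign = -1
    for i in range(2, acc):
        total += sign * Decimal(4) / (2 * i - 1)
        sign = -sign
    return total

print(calc_pi(5000))   # only the first few digits are correct, because the series converges slowly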
You can use the Chudnovsky algorithm to calculate 100,000,000 decimal places of π. See also the related questions 1000-digits-of-pi-in-python and python-pi-calculation.
If you don't want to implement your own algorithm, you can use the mpmath package. For approximately 1000000 digits after the decimal point with the Chudnovsky series:
from mpmath import mp
mp.dps = 1000000 # number of digits
print(mp.pi) # calculate pi to a million digits (takes ~10 seconds)
Python's built-in floating point is a 64-bit IEEE 754 float (commonly called a "double").
What you want is something that is not a floating-point representation but is actually extensible in (binary) digits, just like Python's integer type can grow arbitrarily.
I thus urge you to have a look at a fixed-point representation built on Python's integers, and to do the maths to represent your number that way.
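As a minimal sketch of that idea (not from the answer above), here is Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239) evaluated with Python integers scaled by a power of ten; the helper names are purely illustrative:
def arctan_inv(x, one):
    """arctan(1/x), scaled by `one`, via the Taylor series."""
    power = one // x            # one / x**(2k+1), starting at k = 0
    total = power
    divisor = 1
    sign = 1
    while power:
        power //= x * x
        divisor += 2
        sign = -sign
        total += sign * (power // divisor)
    return total

def pi_fixed_point(digits):
    one = 10 ** (digits + 10)   # ten guard digits absorb the truncation errors
    pi = 16 * arctan_inv(5, one) - 4 * arctan_inv(239, one)
    return pi // 10 ** 10       # drop the guard digits again

print(pi_fixed_point(50))       # 314159265358979323846... (the decimal point is implied after the 3)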

Decimal precision in python without decimal module

I'm fairly new to Python, and was wondering how I would be able to control the decimal precision of any given number without using the decimal module or floating-point formatting (e.g. "%.4f" % n).
Examples (edit):
Input: 2/7
0.28571428571...
Input: 1/3
0.33333333333333...
and I want them to a thousand decimal places, or to any number of decimal places for that matter. I was thinking of using a while loop to control this, but I'm not really sure how. Thanks
edit: The reason I'm not using the decimal module is so I can conceptualize the algorithm/logic behind these types of things. I'm just trying to really understand the logic behind them.
We can use a Python integer to store a decimal with high precision, and do arithmetic on it. Here's how you'd print it out:
def print_decimal(val, prec):
    intp, fracp = divmod(val, 10**prec)
    print(str(intp) + '.' + str(fracp).zfill(prec))
Usage:
>>> prec = 1000
>>> a = 2 * 10**prec
>>> b = a//7
>>> print_decimal(b, prec)
0.2857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857
Without the Decimal module (why, though?), assuming Python 3:
def divide(num, den, prec):
    a = (num*10**prec) // den
    s = str(a).zfill(prec+1)
    return s[0:-prec] + "." + s[-prec:]
Thanks to @nneonneo for the clever .zfill() idea!
>>> divide(2,7,1000)
'0.28571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
71428571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
71428571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
71428571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
7142857142857142857142857142857142857142857'
Caveat: This uses floor division, so divide(2,3,2) will give you 0.66 instead of 0.67.
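If that matters, one possible tweak (an illustrative variant, assuming positive inputs) is to compute one extra digit and round half up:
def divide_rounded(num, den, prec):
    a = (num * 10**(prec + 1)) // den    # one extra digit
    a = (a + 5) // 10                    # round half up on that extra digit
    s = str(a).zfill(prec + 1)
    return s[0:-prec] + "." + s[-prec:]

>>> divide_rounded(2, 3, 2)
'0.67'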
While the other answers use very large values to handle the precision, this implements long division.
def divide(num, denom, prec=30, return_remainder=False):
    "long division"
    remain = lim = 0
    digits = []
    # whole part
    for i in str(num):
        d = 0
        remain *= 10
        remain += int(i)
        while denom*d <= remain:
            d += 1
        if denom*d > remain:
            d -= 1
        remain -= denom*d
        digits.append(d)
    # fractional part
    if remain:
        digits.append('.')
    while remain and lim < prec:
        d = 0
        remain *= 10
        while denom*d <= remain:
            d += 1
        if denom*d > remain:
            d -= 1
        remain -= denom*d
        digits.append(d)
        lim += 1
    # trim leading zeros
    while digits[0] == 0 and digits[1] != '.':
        digits = digits[1:]
    quotient = ''.join(map(str, digits))
    if return_remainder:
        return (quotient, remain)
    else:
        return quotient
Because it's the long-division algorithm, every digit will be correct, and you can also get the remainder (unlike floor division, which won't give you the remainder). The precision here is implemented as the number of digits after the decimal point.
>>> divide(2,7,70)
'0.2857142857142857142857142857142857142857142857142857142857142857142857'
>>> divide(2,7,70,True)
('0.2857142857142857142857142857142857142857142857142857142857142857142857', 1)

Why does str() round up floats?

The built-in Python str() function outputs some weird results when passing in floats with many decimals. This is what happens:
>>> str(19.9999999999999999)
'20.0'
I'm expecting to get:
'19.9999999999999999'
Does anyone know why? and maybe workaround it?
Thanks!
It's not str() that rounds, it's the fact that you're using floats in the first place. Float types are fast, but have limited precision; in other words, they are imprecise by design. This applies to all programming languages. For more details on float quirks, please read "What Every Programmer Should Know About Floating-Point Arithmetic"
If you want to store and operate on precise numbers, use the decimal module:
>>> from decimal import Decimal
>>> str(Decimal('19.9999999999999999'))
'19.9999999999999999'
A Python float has 64 bits (it's a C double). One of those bits is allocated for the sign, 11 for the exponent, and 52 for the mantissa. You can't fit every decimal number, to an unlimited number of digits, into 64 bits, so floating-point numbers rely heavily on rounding.
If you try str(19.998), it will give you something at least close to 19.998 because 64 bits have enough precision to approximate that closely, but something like 19.9999999999999999 is too precise to represent in 64 bits, so it rounds to the nearest representable value, which happens to be 20.
Please note that this is a problem of understanding floating-point (fixed-length) numbers. Most languages do exactly (or something very similar to) what Python does.
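A quick way to check these limits on your own interpreter (a small aside, not part of the original answer):
>>> import sys
>>> sys.float_info.mant_dig    # bits in the mantissa of an IEEE 754 double
53
>>> sys.float_info.dig         # decimal digits guaranteed to survive a round trip
15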
Python float is IEEE 754 64-bit binary floating point. It is limited to 53 bits of precision i.e. slightly less than 16 decimal digits of precision. 19.9999999999999999 contains 18 decimal digits; it cannot be represented exactly as a float. float("19.9999999999999999") produces the nearest floating point value, which happens to be the same as float("20.0").
>>> float("19.9999999999999999") == float("20.0")
True
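Converting the float to Decimal shows the exact value the literal became (a small illustration; a Decimal built directly from a float is exact):
>>> from decimal import Decimal
>>> Decimal(19.9999999999999999)
Decimal('20')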
If by "many decimals" you mean "many digits after the decimal point", please be aware that the same "weird" results happen when there are many decimal digits before the decimal point:
>>> float("199999999999999999")
2e+17
If you want the full float precision, don't use str(), use repr() (this applies to Python 2; in Python 3, str() and repr() give the same result for floats):
>>> x = 1. / 3.
>>> str(x)
'0.333333333333'
>>> str(x).count('3')
12
>>> repr(x)
'0.3333333333333333'
>>> repr(x).count('3')
16
>>>
Update: It's interesting how often decimal is prescribed as a cure-all for float-induced astonishment. This is often accompanied by simple examples like 0.1 + 0.1 + 0.1 != 0.3. Nobody stops to point out that decimal has its share of deficiencies, e.g.
>>> (1.0 / 3.0) * 3.0
1.0
>>> (Decimal('1.0') / Decimal('3.0')) * Decimal('3.0')
Decimal('0.9999999999999999999999999999')
>>>
True, float is limited to 53 binary digits of precision. By default, decimal is limited to 28 decimal digits of precision.
>>> Decimal(2) / Decimal(3)
Decimal('0.6666666666666666666666666667')
>>>
You can change the limit, but it's still limited precision. You still need to know the characteristics of the number format to use it effectively without "astonishing" results, and the extra precision comes at the cost of slower operation (unless you use the 3rd-party cdecimal module).
For any given binary floating point number, there is an infinite set of decimal fractions that, on input, round to that number. Python's str goes to some trouble to produce the shortest decimal fraction from this set; see GLS's paper http://kurtstephens.com/files/p372-steele.pdf for the general algorithm (IIRC they use a refinement that avoids arbitrary-precision math in most cases). You happened to input a decimal fraction that rounds to a float (IEEE double) whose shortest possible decimal fraction is not the same as the one you entered.
