Python: Float infinite length (Precision float)

My code:
def calc_pi(acc):
    pos = False
    sum = 4.0
    for i in range(2, acc):
        if not pos:
            sum -= 4.0/(2*i-1)
            pos = True
        else:
            sum += 4.0/(2*i-1)
            pos = False
    return float(sum)

print(calc_pi(5000))
And of course, I'm trying to calculate pi with more than 10 digits after the decimal point, but Python seems to round to about 10. Is there a simple way to prevent it from doing this, say, a million digits after the decimal point?
Thank you!

You can use the Decimal class provided by the standard library.
From the docs:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
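Applied to your series, a minimal sketch might look like this (the function name calc_pi_decimal and the guard-digit count are my own choices, not from the docs). Note that the Leibniz series converges extremely slowly, so the extra precision only pays off if you also add vastly more terms:
from decimal import Decimal, getcontext

def calc_pi_decimal(terms, digits):
    # Same Leibniz series as in the question, but with Decimal instead of float.
    getcontext().prec = digits + 5          # a few guard digits
    total = Decimal(4)
    sign = -1
    for i in range(2, terms):
        total += sign * Decimal(4) / (2*i - 1)
        sign = -sign
    return total

print(calc_pi_decimal(5000, 50))            # only a handful of digits are actually correct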

You can use the Chudnovsky algorithm to calculate 100,000,000 decimal places of π. See also the related questions 1000-digits-of-pi-in-python and python-pi-calculation.
If you don't want to implement the algorithm yourself, you can use the mpmath package, which computes π with the Chudnovsky series internally. For approximately 1,000,000 digits after the decimal point:
from mpmath import mp
mp.dps = 1000000 # number of digits
print(mp.pi) # calculate pi to a million digits (takes ~10 seconds)
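If you do want to implement the Chudnovsky series yourself, a minimal sketch with Decimal might look like the following (the function name, guard-digit count and term count are my own choices, and it lacks the binary splitting that makes production implementations fast):
from decimal import Decimal, getcontext

def chudnovsky_pi(digits):
    # Plain Chudnovsky summation; each term contributes roughly 14 digits.
    getcontext().prec = digits + 10                  # work with guard digits
    C = 426880 * Decimal(10005).sqrt()
    M, L, X = 1, 13591409, 1                         # exact integer term components
    S = Decimal(13591409)
    for k in range(1, digits // 14 + 2):
        # M_k = (6k)! / ((3k)! * (k!)**3), updated incrementally and exactly
        M = M * (6*k-5) * (6*k-4) * (6*k-3) * (6*k-2) * (6*k-1) * (6*k) \
            // ((3*k-2) * (3*k-1) * (3*k) * k**3)
        L += 545140134
        X *= -262537412640768000                     # (-640320)**3
        S += Decimal(M * L) / X
    pi = C / S
    getcontext().prec = digits                       # drop the guard digits
    return +pi                                       # unary plus re-rounds to the context

print(chudnovsky_pi(50))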

Python's built-in float is typically a 64-bit IEEE 754 binary floating-point number (usually called a "double").
What you want is not a floating-point representation at all, but something whose number of digits can grow as needed, just like Python's integer type can grow arbitrarily.
I would therefore suggest representing your number with integers, for example as a fraction of integers or as a fixed-point value scaled by a large power of ten, and doing the maths in that representation.
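A minimal fixed-point sketch along those lines might look like this (the scaling scheme and the name leibniz_pi_scaled are mine, not from the answer); keep in mind that the Leibniz series itself converges far too slowly to give many correct digits, whatever the representation:
def leibniz_pi_scaled(terms, digits):
    # Work entirely in Python ints, scaled by 10**digits; ints grow as needed.
    scale = 10 ** digits
    total = 4 * scale
    sign = -1
    for i in range(2, terms):
        total += sign * (4 * scale) // (2*i - 1)
        sign = -sign
    return total                                 # roughly pi * 10**digits

approx = leibniz_pi_scaled(100000, 30)
print(f"{approx // 10**30}.{approx % 10**30:030d}")   # only ~5 leading digits are correct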

Related

How to get a very long decimal expansion by dividing an irrational number by another number in Python?

import math
from decimal import *
getcontext().prec = 10000
phi = str(Decimal(1+math.sqrt(5))/Decimal (2) )
print(phi)
print(len(phi))
#output >> phi - 1.6180339887498949025257388711906969547271728515625
#output >> len(phi) - 51
I wanted to get a number 1000 digits long, but in Python the limit seems to be 51. So how can I get a much longer decimal expansion in Python?
The math module works with native binary floating-point arithmetic. So the crucial
1+math.sqrt(5)
part has nothing to do with the decimal precision you set. That part is done entirely in native binary floating-point.
To keep the output to reasonable length, let's try for 80 significant decimal digits instead. So
getcontext().prec = 80
and then
phi = str((1 + Decimal(5).sqrt()) / 2)
instead. Clear? Now the sqrt method of a decimal object is being used instead, and that will respect the decimal precision you set. You can force 1 and 2 to Decimal too but there's no real need to - they'll be converted automatically to Decimal because their other operand is Decimal. Output from the above:
1.6180339887498948482045868343656381177203091798057628621354486227052604628189024
81
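To get the 1000 digits asked for, the same approach works; you just need to raise the precision, for example (the value 1001 is my choice: 1 integer digit plus 1000 fractional digits):
>>> getcontext().prec = 1001
>>> phi = str((1 + Decimal(5).sqrt()) / 2)
>>> phi[:22]
'1.61803398874989484820'
>>> len(phi)                    # "1", ".", and 1000 digits after the point
1002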

How does print "%.1f" % number really work in Python? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Is floating point arbitrary precision available?
(5 answers)
Closed 7 years ago.
I don't know if this is an obvious bug, but while running a Python script for varying the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
for i_delta in range(0, 101, 1):
    delta = float(i_delta) / 100
    (...)
    filename = 'foo' + str(int(delta * 100)) + '.dat'
generated identical files for delta = 0.28 and 0.29, same with .57 and .58, the reason being that python returns float(29)/100 as 0.28999999999999998. But that isn't a systematic error, not in the sense it happens to every integer. So I created the following Python script:
import sys

n = int(sys.argv[1])
for i in range(0, n + 1):
    a = int(100 * (float(i) / 100))
    if i != a: print i, a
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
Any number that can't be written as a finite sum of powers of two (equivalently, as a fraction whose denominator is a power of two) can't be represented exactly as a floating-point number; it has to be approximated. Sometimes the closest approximation is less than the actual number.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
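You can see this directly in the interpreter; on a typical IEEE-754 build of CPython the session looks roughly like this (the exact printed digits can vary by platform):
>>> x = float(29) / 100    # stored as the nearest binary double, slightly below 0.29
>>> x * 100
28.999999999999996
>>> int(x * 100)           # int() truncates toward zero
28
>>> int(round(x * 100))
29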
It's very well known, due to the nature of floating-point numbers.
If you want to do decimal arithmetic rather than floating-point arithmetic, there are libraries for that.
E.g.,
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29')
>>> int(Decimal('29'))
29
In general, Decimal is probably overkill, and it will still have rounding errors in the rare cases where the number does not have a finite decimal representation (for example, any fraction whose denominator in lowest terms has a prime factor other than 2 or 5, the prime factors of the decimal base 10). For example:
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
So it's best to always round before casting floating-point values to int, unless you actually want truncation (a floor, for positive numbers).
>>> int(1.9999)
1
>>> int(round(1.999))
2
Another alternative is to use the Fraction class from the fractions module, which doesn't approximate at all. (It just keeps adding/subtracting and multiplying the integer numerators and denominators as necessary.)
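Applied to the delta values from the question, a quick illustration (mine, not from the original answer):
>>> from fractions import Fraction
>>> delta = Fraction(29, 100)     # an exact rational, no binary rounding
>>> delta * 100
Fraction(29, 1)
>>> int(delta * 100)
29
>>> float(delta)                  # convert to float only at the very end, if needed
0.29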

Decimal precision in python without decimal module

I'm fairly new to Python and was wondering how I would be able to control the decimal precision of any given number without using the decimal module or float formatting (e.g. "%.4f" % n).
Examples (edit):
input(2/7)
0.28571428571....
input(1/3)
0.33333333333333....
and I want them to a thousand decimal places, or to any number of decimal places for that matter. I was thinking of using a while loop as the controlling loop, but I'm not really sure how to do so. Thanks
edit: The reason I'm not using the decimal module is so that I can understand the algorithm/logic behind this kind of thing. I'm just trying to really understand the logic.
We can use a long to store a decimal with high precision, and do arithmetic on it. Here's how you'd print it out:
def print_decimal(val, prec):
    # val is the scaled integer, i.e. the real number times 10**prec
    intp, fracp = divmod(val, 10**prec)
    print str(intp) + '.' + str(fracp).zfill(prec)
Usage:
>>> prec = 1000
>>> a = 2 * 10**prec
>>> b = a//7
>>> print_decimal(b, prec)
0.2857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857142857
Without the Decimal module (why, though?), assuming Python 3:
def divide(num, den, prec):
    # Scale the numerator by 10**prec, integer-divide, then place the decimal point.
    a = (num * 10**prec) // den
    s = str(a).zfill(prec + 1)
    return s[0:-prec] + "." + s[-prec:]
Thanks to @nneonneo for the clever .zfill() idea!
>>> divide(2,7,1000)
'0.28571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
71428571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
71428571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
71428571428571428571428571428571428571428571428571428571428571428571428571428571
42857142857142857142857142857142857142857142857142857142857142857142857142857142
85714285714285714285714285714285714285714285714285714285714285714285714285714285
7142857142857142857142857142857142857142857'
Caveat: This uses floor division, so divide(2,3,2) will give you 0.66 instead of 0.67.
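If the truncation bothers you, a small variation (my own sketch, not part of the original answer) computes one extra digit and rounds half up:
def divide_rounded(num, den, prec):
    a = (num * 10**(prec + 1)) // den     # one extra digit of the quotient
    a = (a + 5) // 10                     # round half up on the last kept digit
    s = str(a).zfill(prec + 1)
    return s[:-prec] + "." + s[-prec:]

>>> divide_rounded(2, 3, 2)
'0.67'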
While the other answers use very large values to handle the precision, this implements long division.
def divide(num, denom, prec=30, return_remainder=False):
    "long division"
    remain = lim = 0
    digits = []
    # whole part
    for i in str(num):
        d = 0
        remain = remain*10 + int(i)
        while denom*d <= remain:
            d += 1
        if denom*d > remain:
            d -= 1
        remain -= denom*d
        digits.append(d)
    # fractional part
    if remain:
        digits.append('.')
    while remain and lim < prec:
        d = 0
        remain *= 10
        while denom*d <= remain:
            d += 1
        if denom*d > remain:
            d -= 1
        remain -= denom*d
        digits.append(d)
        lim += 1
    # trim leading zeros
    while digits[0] == 0 and digits[1] != '.':
        digits = digits[1:]
    quotient = ''.join(map(str, digits))
    if return_remainder:
        return (quotient, remain)
    else:
        return quotient
Because it's the long division algorithm, every digit will be correct, and you can also get the remainder (unlike floor division, which doesn't give you one). The precision here is implemented as the number of digits after the decimal point.
>>> divide(2,7,70)
'0.2857142857142857142857142857142857142857142857142857142857142857142857'
>>> divide(2,7,70,True)
('0.2857142857142857142857142857142857142857142857142857142857142857142857', 1)
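The remainder is also what lets you decide whether the last digit should round up; a quick illustration of mine:
>>> q, r = divide(2, 3, 4, True)
>>> q, r
('0.6666', 2)
>>> 2 * r >= 3          # remainder is at least half the denominator, so round up to 0.6667
True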

Arbitrary precision of square roots

I was quite disappointed when decimal.Decimal(math.sqrt(2)) yielded
Decimal('1.4142135623730951454746218587388284504413604736328125')
and the digits after the 15th decimal place turned out wrong. (Despite happily giving you much more than 15 digits!)
How can I get the first m correct digits in the decimal expansion of sqrt(n) in Python?
Use the sqrt method on Decimal
>>> from decimal import *
>>> getcontext().prec = 100 # Change the precision
>>> Decimal(2).sqrt()
Decimal('1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641573')
You can try bigfloat. Example from the project page:
from bigfloat import *
sqrt(2, precision(100)) # compute sqrt(2) with 100 bits of precision
IEEE-754 double-precision floating-point numbers have only about 16 significant decimal digits of precision (53 bits of mantissa). Any software or hardware that uses that format cannot do better:
http://en.wikipedia.org/wiki/IEEE_754-2008
You'd need a special BigDecimal class implementation, with all math functions implemented to use it. Python has such a thing:
https://literateprograms.org/arbitrary-precision_elementary_mathematical_functions__python_.html
How can I get the first m correct digits in the decimal expansion of sqrt(n) in Python?
One way is to calculate the integer square root of the number multiplied by the required power of 10. For example, to see the first 20 decimal places of sqrt(2), you can do:
>>> from gmpy2 import isqrt
>>> num = 2
>>> prec = 20
>>> isqrt(num * 10**(2*prec))
mpz(141421356237309504880)
The isqrt function is actually quite easy to implement yourself using the algorithm provided on the Wikipedia page.
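For reference, a minimal sketch of such an integer square root using Newton's method might look like this (the function name and structure are mine; in Python 3.8+ the standard library's math.isqrt does the same job):
def isqrt_newton(n):
    # Integer square root: returns floor(sqrt(n)) using only int arithmetic.
    if n < 0:
        raise ValueError("square root of a negative number")
    if n == 0:
        return 0
    x = 1 << ((n.bit_length() + 1) // 2)   # initial guess, always >= sqrt(n)
    while True:
        y = (x + n // x) // 2              # Newton step
        if y >= x:
            return x
        x = y

>>> isqrt_newton(2 * 10**40)               # sqrt(2) to 20 decimal places, as an integer
141421356237309504880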

