Python decimal precision and rounding

I want to limit the precision of a float and use it as an exact value in subsequent operations, but I'm doing something wrong with the decimal module.
from decimal import *
factor = 350/float(255)
# Settings
getcontext().prec = 1
getcontext().rounding = ROUND_UP
print Decimal(factor)
Python output:
1.372549019607843145962533526471816003322601318359375
Expected and wanted result: 1.4

From the Python documentation (https://docs.python.org/2/library/decimal.html):
The significance of a new Decimal is determined solely by the number of digits input. Context precision and rounding only come into play during arithmetic operations.
>>> getcontext().prec = 6
>>> Decimal('3.0')
Decimal('3.0')
>>> Decimal('3.1415926535')
Decimal('3.1415926535')
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85987')
>>> getcontext().rounding = ROUND_UP
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85988')
So you need to apply the context during an arithmetic operation, not when constructing or printing the result.
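For the original question, that means performing the division with Decimal operands. Note also that prec counts significant digits, not decimal places, so two digits (not one) are needed to get 1.4. A minimal sketch:

```python
from decimal import Decimal, getcontext, ROUND_UP

# The context is consulted during the division itself, not at construction time.
getcontext().prec = 2           # 2 significant digits (prec is not "decimal places")
getcontext().rounding = ROUND_UP

factor = Decimal(350) / Decimal(255)   # exact integers in, rounded quotient out
print(factor)  # 1.4
```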

Related

Python Decimal module not rounding as expected

I am trying to use the Decimal module to do some FX calculations (instead of using floats).
However when I do the following, I do not get the expected value output:
>>> from decimal import Decimal
>>> x = Decimal(1.3755)
>>> y = Decimal(1.2627)
>>> z = y/(1/x)
>>> print(z)
1.736843849999999839084452447
>>>
The output should be: 1.73684385
I thought using Decimals would correct this issue. How can I resolve this rounding issue?
from decimal import Decimal
x = Decimal('1.3755')
y = Decimal('1.2627')
print(y/(1/x))
# 1.736843850000000000000000000
Use a string to construct the Decimal instead of a float; Decimal objects built from strings are exact:
>>> from decimal import Decimal
>>> x = Decimal('1.3755')
>>> y = Decimal('1.2627')
>>> z = y/(1/x)
>>> print(z)
1.736843850000000000000000000
>>>
Otherwise the float's binary representation error is carried into the Decimal.
As @schwobaseggl mentioned:
Note that the float is passed to the Decimal constructor, so the precision is lost there already. The Decimal constructor has no chance of knowing the desired precision. The result is still a decimal object, but with the precision of the passed float.
You can also use a format specifier such as "{0:.8f}" to round the number when printing:
print(z)
# 1.736843849999999839084452447
print("{0:.8f}".format(z))
# 1.73684385
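Alternatively, Decimal's own quantize method rounds to a fixed number of decimal places and returns a Decimal rather than a string. A sketch using the same figures (default 28-digit context assumed):

```python
from decimal import Decimal, ROUND_HALF_EVEN

x = Decimal('1.3755')
y = Decimal('1.2627')
z = y / (1 / x)

# quantize rounds to the exponent of its argument, here 8 decimal places
print(z.quantize(Decimal('0.00000001'), rounding=ROUND_HALF_EVEN))  # 1.73684385
```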
Try this; it will help you. Set your precision level:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')

Changing Str to Float Loses Digits

I have a program where I get numbers like: 0.18869952857494354248046875
I need to change these from string to float, but when I do that, I get: 0.18869952857494354
Why does Python cut off ~10 digits, and how can I retain them in the conversion?
The limitations of floating point math mean that you can't have arbitrarily long floating point numbers: https://docs.python.org/3/tutorial/floatingpoint.html
You can get around this by using the decimal library (https://docs.python.org/3/library/decimal.html#module-decimal):
>>> from decimal import Decimal
>>> a = Decimal("0.18869952857494354248046875")
>>> a
Decimal('0.18869952857494354248046875')
>>> b = Decimal("0.111111111111111111111111")
>>> a + b
Decimal('0.29981063968605465359157975')
>>> a * b
Decimal('0.02096661428610483805338539570')
By default the decimal library has 28 places of precision but you can change this using getcontext().prec:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
>>> getcontext().prec = 60
>>> Decimal(1) / Decimal(7)
Decimal('0.142857142857142857142857142857142857142857142857142857142857')
This happens because the float datatype in Python carries only about 16 significant digits of precision.
For higher precision (to retain all digits), use a specialized package such as mpmath.
You can use it like:
from mpmath import mp, sqrt
mp.dps = 64  # 64 decimal places
sqrt(5)/2    # 64-digit, high-precision result
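If pulling in an extra dependency is not an option, note that Decimal itself has a sqrt method, so this particular computation can also be done with the standard library. A sketch, assuming 64 significant digits is enough:

```python
from decimal import Decimal, getcontext

getcontext().prec = 64           # 64 significant digits
root = Decimal(5).sqrt() / 2     # high-precision sqrt(5)/2 without mpmath
print(root)
```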

Float division of big numbers in python

I have two big numbers a and b of length around 10000, such that a <= b.
Now I have to find c = a / b to 10 decimal places. How do I do it without losing precision?
The decimal module should work. As seen in TigerhawkT3's link, you can choose the number of decimal places your quotient should have. Read the inputs as integers; going through float would throw away most of the 10000 digits before Decimal ever sees them:
from decimal import *
getcontext().prec = 10
a = int(raw_input('The first number:'))
b = int(raw_input('The second number:'))
c = Decimal(a) / Decimal(b)
print c
You could use the decimal module with a local context:
from decimal import localcontext, Decimal

def foo(a, b):
    with localcontext() as ctx:
        ctx.prec = 10  # sets precision to 10 places temporarily
        c = Decimal(a) / Decimal(b)  # pass ints or strings, not floats, to keep precision
    return c
You can also compute the quotient with divmod, though note that the fractional part below is still produced by float division, so it is only good to about 16 digits:
def longdiv(dividend, divisor):
    quotient, remainder = divmod(dividend, divisor)
    return str(quotient) + str(remainder * 1.0 / divisor)[1:]
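To avoid the float limitation entirely, the division can be done in pure integer arithmetic by scaling the dividend first. This is a minimal sketch (long_divide is a hypothetical helper, not from the answers above); it truncates rather than rounds the last place:

```python
def long_divide(dividend, divisor, places=10):
    # Scale up, do integer division, then re-insert the decimal point.
    # Works for arbitrarily large operands because Python ints are unbounded.
    scaled = dividend * 10 ** places
    q = scaled // divisor
    s = str(q).rjust(places + 1, '0')   # ensure a digit before the point
    return s[:-places] + '.' + s[-places:]

print(long_divide(1, 7))        # 0.1428571428
print(long_divide(10**30, 7))   # huge operands are fine
```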

Clarification on the Decimal type in Python

Everybody knows, or at least, every programmer should know, that using the float type could lead to precision errors. However, in some cases, an exact solution would be great and there are cases where comparing using an epsilon value is not enough. Anyway, that's not really the point.
I knew about the Decimal type in Python but never tried to use it. It states that "Decimal numbers can be represented exactly" and I thought that it meant a clever implementation that allows to represent any real number. My first try was:
>>> from decimal import Decimal
>>> d = Decimal(1) / Decimal(3)
>>> d3 = d * Decimal(3)
>>> d3 < Decimal(1)
True
Quite disappointed, I went back to the documentation and kept reading:
The context for arithmetic is an environment specifying precision [...]
OK, so there is actually a precision. And the classic issues can be reproduced:
>>> dd = d * 10**20
>>> dd
Decimal('33333333333333333333.33333333')
>>> for i in range(10000):
...     dd += 1 / Decimal(10**10)
>>> dd
Decimal('33333333333333333333.33333333')
So, my question is: is there a way to have a Decimal type with an infinite precision? If not, what's the more elegant way of comparing 2 decimal numbers (e.g. d3 < 1 should return False if the delta is less than the precision).
Currently, when I only do divisions and multiplications, I use the Fraction type:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
Is it the best approach? What could be the other options?
The Decimal class is best for financial-style addition, subtraction, multiplication, and division problems:
>>> (1.1+2.2-3.3)*10000000000000000000
4440.892098500626 # relevant for government invoices...
>>> import decimal
>>> D=decimal.Decimal
>>> (D('1.1')+D('2.2')-D('3.3'))*10000000000000000000
Decimal('0.0')
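For money specifically, the usual pattern is to quantize to the smallest currency unit with an explicit rounding mode, since context precision alone counts significant digits rather than cents. A minimal sketch:

```python
from decimal import Decimal, ROUND_HALF_UP

total = Decimal('2.675')
# Round to cents; with floats, round(2.675, 2) gives 2.67 because of
# the binary representation of 2.675.
cents = total.quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(cents)  # 2.68
```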
The Fraction module works well with the rational number problem domain you describe:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
For pure multi precision floating point for scientific work, consider mpmath.
If your problem can be held to the symbolic realm, consider sympy. Here is how you would handle the 1/3 issue:
>>> sympy.sympify('1/3')*3
1
>>> (sympy.sympify('1/3')*3) == 1
True
Sympy uses mpmath for arbitrary precision floating point, includes the ability to handle rational numbers and irrational numbers symbolically.
Consider the pure floating point representation of the irrational value of √2:
>>> math.sqrt(2)
1.4142135623730951
>>> math.sqrt(2)*math.sqrt(2)
2.0000000000000004
>>> math.sqrt(2)*math.sqrt(2)==2
False
Compare to sympy:
>>> sympy.sqrt(2)
sqrt(2) # treated symbolically
>>> sympy.sqrt(2)*sympy.sqrt(2)==2
True
You can also reduce values:
>>> import sympy
>>> sympy.sqrt(8)
2*sqrt(2) # √8 == √(4 x 2) == 2*√2...
However, you can see issues with Sympy similar to straight floating point if not careful:
>>> 1.1+2.2-3.3
4.440892098500626e-16
>>> sympy.sympify('1.1+2.2-3.3')
4.44089209850063e-16 # :-(
This is better done with Decimal:
>>> D('1.1')+D('2.2')-D('3.3')
Decimal('0.0')
Or using Fractions or Sympy and keeping values such as 1.1 as ratios:
>>> sympy.sympify('11/10+22/10-33/10')==0
True
>>> Fraction('1.1')+Fraction('2.2')-Fraction('3.3')==0
True
Or use Rational in sympy:
>>> frac=sympy.Rational
>>> frac('1.1')+frac('2.2')-frac('3.3')==0
True
>>> frac('1/3')*3
1
You can play with sympy live.
So, my question is: is there a way to have a Decimal type with an infinite precision?
No, since storing an irrational number would require infinite memory.
Where Decimal is useful is representing things like monetary amounts, where the values need to be exact and the precision is known a priori.
From the question, it is not entirely clear that Decimal is more appropriate for your use case than float.
is there a way to have a Decimal type with an infinite precision?
No; for any non-empty interval on the real line, you cannot represent all the numbers in the set with infinite precision using a finite number of bits. This is why Fraction is useful, as it stores the numerator and denominator as integers, which can be represented precisely:
>>> Fraction("1.25")
Fraction(5, 4)
If you are new to Decimal, this post is relevant: Python floating point arbitrary precision available?
The essential idea from the answers and comments is that for computationally tough problems where precision is needed, you should use the mpmath module https://code.google.com/p/mpmath/. An important observation is that:
The problem with using Decimal numbers is that you can't do much in the way of math functions on Decimal objects
Just to point out something that might not be immediately obvious to everyone:
The documentation for the decimal module says
... The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero.
(Also see the classic: Is floating point math broken?)
However, if we use decimal.Decimal naively, we get the same "unexpected" result
>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) == Decimal(0.3)
False
The problem in the naive example above is the use of float arguments, which are "losslessly converted to [their] exact decimal equivalent," as explained in the docs.
The trick (implicit in the accepted answer) is to construct the Decimal instances using e.g. strings, instead of floats
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
True
or, perhaps more convenient in some cases, using tuples (<sign>, <digits>, <exponent>)
>>> Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) == Decimal((0, (3,), -1))
True
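Decimal.from_float makes the lossless float conversion explicit, which is a handy way to see exactly what error a float argument drags in:

```python
from decimal import Decimal

# Same conversion Decimal(0.1) performs, spelled out explicitly
d = Decimal.from_float(0.1)
print(d)
# 0.1000000000000000055511151231257827021181583404541015625
```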
Note: this does not answer the original question, but it is closely related, and may be of help to people who end up here based on the question title.

Convert float to rounded decimal equivalent

When you convert a float to Decimal, the Decimal will contain as accurate a representation of the binary number as it can. It's nice to be accurate, but it isn't always what you want. Since many decimal numbers can't be represented exactly in binary, the resulting Decimal will be a little off - sometimes a little high, sometimes a little low.
>>> from decimal import Decimal
>>> for f in (0.1, 0.3, 1e25, 1e28, 1.0000000000001):
...     print Decimal(f)
0.1000000000000000055511151231257827021181583404541015625
0.299999999999999988897769753748434595763683319091796875
10000000000000000905969664
9999999999999999583119736832
1.000000000000099920072216264088638126850128173828125
Ideally we'd like the Decimal to be rounded to the most likely decimal equivalent.
I tried converting to str since a Decimal created from a string will be exact. Unfortunately str rounds a little too much.
>>> for f in (0.1, 0.3, 1e25, 1e28, 1.0000000000001):
...     print Decimal(str(f))
0.1
0.3
1E+25
1E+28
1.0
Is there a way of getting a nicely rounded Decimal from a float?
It turns out that repr does a better job of converting a float to a string than str does. It's the quick-and-easy way to do the conversion.
>>> for f in (0.1, 0.3, 1e25, 1e28, 1.0000000000001):
...     print Decimal(repr(f))
0.1
0.3
1E+25
1E+28
1.0000000000001
Before I discovered that, I came up with a brute-force way of doing the rounding. It has the advantage of recognizing that large numbers are accurate to 15 digits - the repr method above only recognizes one significant digit for the 1e25 and 1e28 examples.
from decimal import Decimal, DecimalTuple

def _increment(digits, exponent):
    new_digits = [0] + list(digits)
    new_digits[-1] += 1
    for i in range(len(new_digits)-1, 0, -1):
        if new_digits[i] > 9:
            new_digits[i] -= 10
            new_digits[i-1] += 1
    if new_digits[0]:
        return tuple(new_digits[:-1]), exponent + 1
    return tuple(new_digits[1:]), exponent

def nearest_decimal(f):
    sign, digits, exponent = Decimal(f).as_tuple()
    if len(digits) > 15:
        round_up = digits[15] >= 5
        exponent += len(digits) - 15
        digits = digits[:15]
        if round_up:
            digits, exponent = _increment(digits, exponent)
    while digits and digits[-1] == 0 and exponent < 0:
        digits = digits[:-1]
        exponent += 1
    return Decimal(DecimalTuple(sign, digits, exponent))
>>> for f in (0.1, 0.3, 1e25, 1e28, 1.0000000000001):
...     print nearest_decimal(f)
0.1
0.3
1.00000000000000E+25
1.00000000000000E+28
1.0000000000001
Edit: I discovered one more reason to use the brute-force rounding. repr tries to return a string that uniquely identifies the underlying float bit representation, but it doesn't necessarily ensure the accuracy of the last digit. By using one less digit, my rounding function will more often be the number you would expect.
>>> print Decimal(repr(2.0/3.0))
0.6666666666666666
>>> print dec.nearest_decimal(2.0/3.0)
0.666666666666667
The decimal created with repr is actually more accurate, but it implies a level of precision that doesn't exist. The nearest_decimal function delivers a better match between precision and accuracy.
I have implemented this in Pharo Smalltalk, in a Float method named asMinimalDecimalFraction.
It is exactly the same problem as printing the shortest decimal fraction that would be re-interpreted as the same float/double, assuming correct rounding (to nearest).
See my answer at Count number of digits after `.` in floating point numbers? for more references
