Decimal fixed precision - python

I would like to use Decimal in currency calculations, so I want to work with exactly two digits after the decimal point. Initially I thought that the prec of decimal's context referred to that property, but after a few experiments I'm a bit confused.
Experiment #1:
In [1]: import decimal
In [2]: decimal.getcontext().prec = 2
In [3]: a = decimal.Decimal('159.9')
In [4]: a
Out[4]: Decimal('159.9')
In [5]: b = decimal.Decimal('200')
In [6]: b
Out[6]: Decimal('200')
In [7]: b - a
Out[7]: Decimal('40')
Experiment #2:
In [8]: decimal.getcontext().prec = 4
In [9]: a = decimal.Decimal('159.9')
In [10]: a
Out[10]: Decimal('159.9')
In [11]: b = decimal.Decimal('200')
In [12]: b
Out[12]: Decimal('200')
In [13]: b - a
Out[13]: Decimal('40.1')
Experiment #3: (prec is still set to 4)
In [14]: a = decimal.Decimal('159999.9')
In [15]: a
Out[15]: Decimal('159999.9')
In [16]: b = decimal.Decimal('200000')
In [17]: b
Out[17]: Decimal('200000')
In [18]: b - a
Out[18]: Decimal('4.000E+4')
Why does it behave like this in my examples? How should I work with Decimal for my currency calculations?

The precision sets the number of significant digits, which is not the same as the number of digits after the decimal point.
So with a precision of 2 you get two significant digits, and a result with three significant digits such as 40.1 is rounded to its two most significant digits, giving 40.
There's no direct way to set the number of digits after the decimal point with Decimal. However, you can use a high precision and always round your results to two decimal places:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 60 # use a higher/lower one if needed
>>> Decimal('200') - Decimal('159.9')
Decimal('40.1')
>>> r = Decimal('200') - Decimal('159.9')
>>> round(r, 2)
Decimal('40.10')
The decimal FAQ also includes a similar question and answer (using quantize):
Q. In a fixed-point application with two decimal places, some inputs have many places and need to be rounded. Others are not supposed to have excess digits and need to be validated. What methods should be used?
A. The quantize() method rounds to a fixed number of decimal places. If the Inexact trap is set, it is also useful for validation:
>>> TWOPLACES = Decimal(10) ** -2 # same as Decimal('0.01')
>>> # Round to two places
>>> Decimal('3.214').quantize(TWOPLACES)
Decimal('3.21')
>>> # Validate that a number does not exceed two places
>>> Decimal('3.21').quantize(TWOPLACES, context=Context(traps=[Inexact]))
Decimal('3.21')
>>> Decimal('3.214').quantize(TWOPLACES, context=Context(traps=[Inexact]))
Traceback (most recent call last):
...
Inexact: None
Q. Once I have valid two place inputs, how do I maintain that invariant throughout an application?
A. Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point. Other operations, like division and non-integer multiplication, will change the number of decimal places and need to be followed-up with a quantize() step:
>>> a = Decimal('102.72') # Initial fixed-point values
>>> b = Decimal('3.17')
>>> a + b # Addition preserves fixed-point
Decimal('105.89')
>>> a - b
Decimal('99.55')
>>> a * 42 # So does integer multiplication
Decimal('4314.24')
>>> (a * b).quantize(TWOPLACES) # Must quantize non-integer multiplication
Decimal('325.62')
>>> (b / a).quantize(TWOPLACES) # And quantize division
Decimal('0.03')
In developing fixed-point applications, it is convenient to define functions to handle the quantize() step:
>>> def mul(x, y, fp=TWOPLACES):
...     return (x * y).quantize(fp)
>>> def div(x, y, fp=TWOPLACES):
...     return (x / y).quantize(fp)
>>> mul(a, b) # Automatically preserve fixed-point
Decimal('325.62')
>>> div(b, a)
Decimal('0.03')
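The FAQ snippets above omit their imports; a self-contained version of the quantize-based validation might look like this (only stdlib decimal names are used):

```python
from decimal import Decimal, Context, Inexact

TWOPLACES = Decimal('0.01')
strict = Context(traps=[Inexact])

# A value that already has at most two places passes through unchanged.
print(Decimal('3.21').quantize(TWOPLACES, context=strict))    # 3.21

# A value with excess digits trips the Inexact trap instead of silently rounding.
try:
    Decimal('3.214').quantize(TWOPLACES, context=strict)
except Inexact:
    print('rejected: more than two decimal places')
```

Trapping Inexact turns silent rounding into an exception, which is what makes it usable for input validation.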

The best method I've found is to set a high prec and use Decimal.quantize to round the result:
>>> decimal.getcontext().prec = 100
>>> a = Decimal('400000.123456789')
>>> b = Decimal('200000.0')
>>> a - b
Decimal('200000.123456789')
>>> (a - b).quantize(Decimal('0.01'))
Decimal('200000.12')
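Putting the two ideas together — a high working precision for intermediates plus a final quantize — a small helper can keep the rounding policy in one place. This is only a sketch; the names CENTS and to_money are illustrative:

```python
from decimal import Decimal, getcontext, ROUND_HALF_UP

getcontext().prec = 60          # generous working precision for intermediate results
CENTS = Decimal('0.01')

def to_money(value) -> Decimal:
    # ROUND_HALF_UP is what most people expect for currency; note that the
    # default context rounding is ROUND_HALF_EVEN (banker's rounding).
    return Decimal(value).quantize(CENTS, rounding=ROUND_HALF_UP)

print(to_money(Decimal('200') - Decimal('159.9')))   # 40.10
print(to_money(Decimal('0.125')))                    # 0.13 (half rounds up)
```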


Comparing two variables upto x decimal places

Let a = 1.11114 and b = 1.11118.
When I compare these two variables using code like
if b <= a:
I want the comparison to consider only the first 4 decimal places, so that a and b count as equal.
Can anyone help me with an efficient way to do this?
Thank you!
To avoid rounding, you can multiply the number by a power of 10 and cast to int (truncating the decimal part), then divide by the same power to get the truncated float:
n = 4 # number of decimal digits you want to consider
a_truncated = int(a * 10**n)/10**n
See also Python setting Decimal Place range without rounding?
Possible duplicate of Truncate to three decimals in Python
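If you'd rather avoid binary-float arithmetic entirely, the same truncation can be done with Decimal and ROUND_DOWN; a sketch, where truncate_dec is just an illustrative name:

```python
from decimal import Decimal, ROUND_DOWN

def truncate_dec(x, places=4):
    # quantize with ROUND_DOWN chops toward zero at the given number of places;
    # Decimal(1).scaleb(-places) builds the template, e.g. Decimal('0.0001')
    return Decimal(str(x)).quantize(Decimal(1).scaleb(-places), rounding=ROUND_DOWN)

print(truncate_dec(1.11114) == truncate_dec(1.11118))   # True: both become Decimal('1.1111')
```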
Scale by 10^digits, truncate, then divide by the same factor:
>>> import math
>>> def truncate(number, digits) -> float:
...     stepper = 10.0 ** digits
...     return math.trunc(stepper * number) / stepper
>>> a
1.11114
>>> b
1.11118
>>> truncate(a,4) == truncate(b,4)
True
Solution by @Erwin Mayer:
You can check whether their difference is close to 0 with an absolute tolerance of 1e-4, using math.isclose:
>>> import math
>>> math.isclose(a - b, 0, abs_tol=1e-4)
True
Use the built-in round() function:
a = round(a, 4)  # 4 is the number of digits you want
b = round(b, 4)
if a >= b:
    ...  # do stuff
Note that round() rounds rather than truncates: round(1.11114, 4) gives 1.1111 but round(1.11118, 4) gives 1.1112, so the two values from the question still compare as unequal.

Python 3 Decimal precision

I have an issue with floating-point numbers:
>>> a = 0.4812
>>> b = 0.4813
>>> a - b
-9.999999999998899e-05
>>> a = Decimal(0.4812)
>>> b = Decimal(0.4813)
>>> a - b
Decimal('-0.000099999999999988986587595718447118997573852539062500')
How can I get exactly -0.0001?
You need to pass the numbers as strings to the Decimal constructor; if you use float literals, precision has already been lost before the Decimal object is constructed.
>>> a = Decimal('0.4812')
>>> b = Decimal('0.4813')
>>> a - b
Decimal('-0.0001')
To illustrate more clearly:
>>> Decimal('0.4812')
Decimal('0.4812')
>>> Decimal(0.4812)
Decimal('0.481200000000000016608936448392341844737529754638671875')
If you want to round it, you can use: round(-0.000099999999999988986587595718447118997573852539062500, 4)

Floor of very large floats in python

Python has no limit on the size of its integers, but its floats do have limits on range and precision.
How do I go about calculating the floor of very large values?
I am trying to calculate floor(A*B), where A is a small irrational number such as sqrt(2), e, or sqrt(5), and B is a very large number on the order of 10^1000.
You can use the decimal module:
>>> from decimal import Decimal
>>> from math import floor, sqrt
>>>
>>> d1 = Decimal(sqrt(2))
>>> d2 = Decimal(10**1000)
>>>
>>> result = d1 * d2
>>> floor(result)
You can also set the precision for the decimal using getcontext().prec in order to get a more precise result:
>>> from decimal import *
>>> getcontext().prec = 100
>>> d1 = Decimal(2).sqrt()
>>> d1
Decimal('1.414213562373095048801688724209698078569671875376948073176679737990732478462107038850387534327641573')
64-bit floats don't go above ~1.8 × 10^308, so your 10^1000 is definitely not going to fit in a float at all, regardless of what constant you multiply it by. So your procedure isn't going to work with floats.
Consider using the decimal module. E.g.:
import decimal
import math
a = decimal.Decimal("10") ** 10000
b = decimal.Decimal("0.123")
math.floor(a*b)
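For the original problem — the floor of an irrational number times 10^1000 — the context precision must cover every digit of the integer part. A sketch, assuming prec = 1010 leaves enough headroom beyond the 1001-digit integer part:

```python
import decimal
import math

# sqrt(2) * 10**1000 has a 1001-digit integer part, so we need at least that
# many significant digits; a few extra digits guard the final digit of the floor.
decimal.getcontext().prec = 1010

a = decimal.Decimal(2).sqrt()       # sqrt(2) computed to 1010 significant digits
b = decimal.Decimal(10) ** 1000     # exact: an integer power of ten
result = math.floor(a * b)          # Python 3: math.floor on a Decimal returns an int

print(str(result)[:20])             # 14142135623730950488
print(len(str(result)))             # 1001 digits
```

Note that Decimal(2).sqrt() is essential here; Decimal(math.sqrt(2)) would carry only the ~16 accurate digits of the float.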

Clarification on the Decimal type in Python

Everybody knows, or at least, every programmer should know, that using the float type could lead to precision errors. However, in some cases, an exact solution would be great and there are cases where comparing using an epsilon value is not enough. Anyway, that's not really the point.
I knew about the Decimal type in Python but had never tried to use it. It states that "Decimal numbers can be represented exactly" and I thought that meant a clever implementation able to represent any real number. My first try was:
>>> from decimal import Decimal
>>> d = Decimal(1) / Decimal(3)
>>> d3 = d * Decimal(3)
>>> d3 < Decimal(1)
True
Quite disappointed, I went back to the documentation and kept reading:
The context for arithmetic is an environment specifying precision [...]
OK, so there is actually a precision. And the classic issues can be reproduced:
>>> dd = d * 10**20
>>> dd
Decimal('33333333333333333333.33333333')
>>> for i in range(10000):
...     dd += 1 / Decimal(10**10)
>>> dd
Decimal('33333333333333333333.33333333')
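(Each increment is silently rounded away because dd already occupies all 28 significant digits of the default context; the context's flags record that this happened — a minimal check, using only the stdlib decimal module:)

```python
from decimal import Decimal, getcontext, Inexact

ctx = getcontext()
ctx.prec = 28                                   # the default precision
dd = Decimal('33333333333333333333.33333333')   # already 28 significant digits
ctx.clear_flags()
dd += Decimal('1e-10')
print(dd)                  # unchanged: 33333333333333333333.33333333
print(ctx.flags[Inexact])  # True — the addition lost information
```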
So, my question is: is there a way to have a Decimal type with infinite precision? If not, what's the most elegant way of comparing two decimal numbers (e.g. d3 < 1 should return False if the delta is smaller than the precision)?
Currently, when I only do divisions and multiplications, I use the Fraction type:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
Is it the best approach? What could be the other options?
The Decimal class is best for financial-style addition, subtraction, multiplication, and division problems:
>>> (1.1+2.2-3.3)*10000000000000000000
4440.892098500626 # relevant for government invoices...
>>> import decimal
>>> D=decimal.Decimal
>>> (D('1.1')+D('2.2')-D('3.3'))*10000000000000000000
Decimal('0.0')
The Fraction module works well with the rational number problem domain you describe:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
For pure multi precision floating point for scientific work, consider mpmath.
If your problem can be held to the symbolic realm, consider sympy. Here is how you would handle the 1/3 issue:
>>> sympy.sympify('1/3')*3
1
>>> (sympy.sympify('1/3')*3) == 1
True
Sympy uses mpmath for arbitrary precision floating point, includes the ability to handle rational numbers and irrational numbers symbolically.
Consider the pure floating point representation of the irrational value of √2:
>>> math.sqrt(2)
1.4142135623730951
>>> math.sqrt(2)*math.sqrt(2)
2.0000000000000004
>>> math.sqrt(2)*math.sqrt(2)==2
False
Compare to sympy:
>>> sympy.sqrt(2)
sqrt(2) # treated symbolically
>>> sympy.sqrt(2)*sympy.sqrt(2)==2
True
You can also reduce values:
>>> import sympy
>>> sympy.sqrt(8)
2*sqrt(2) # √8 == √(4 x 2) == 2*√2...
However, you can see issues with Sympy similar to straight floating point if not careful:
>>> 1.1+2.2-3.3
4.440892098500626e-16
>>> sympy.sympify('1.1+2.2-3.3')
4.44089209850063e-16 # :-(
This is better done with Decimal:
>>> D('1.1')+D('2.2')-D('3.3')
Decimal('0.0')
Or using Fractions or Sympy and keeping values such as 1.1 as ratios:
>>> sympy.sympify('11/10+22/10-33/10')==0
True
>>> Fraction('1.1')+Fraction('2.2')-Fraction('3.3')==0
True
Or use Rational in sympy:
>>> frac=sympy.Rational
>>> frac('1.1')+frac('2.2')-frac('3.3')==0
True
>>> frac('1/3')*3
1
You can play with sympy live.
So, my question is: is there a way to have a Decimal type with an infinite precision?
No, since storing an irrational number would require infinite memory.
Where Decimal is useful is representing things like monetary amounts, where the values need to be exact and the precision is known a priori.
From the question, it is not entirely clear that Decimal is more appropriate for your use case than float.
is there a way to have a Decimal type with an infinite precision?
No; for any non-empty interval on the real line, you cannot represent all the numbers in the set with infinite precision using a finite number of bits. This is why Fraction is useful, as it stores the numerator and denominator as integers, which can be represented precisely:
>>> Fraction("1.25")
Fraction(5, 4)
If you are new to Decimal, this post is relevant: Python floating point arbitrary precision available?
The essential idea from the answers and comments is that for computationally tough problems where precision is needed, you should use the mpmath module https://code.google.com/p/mpmath/. An important observation is that,
The problem with using Decimal numbers is that you can't do much in the way of math functions on Decimal objects
Just to point out something that might not be immediately obvious to everyone:
The documentation for the decimal module says
... The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero.
(Also see the classic: Is floating point math broken?)
However, if we use decimal.Decimal naively, we get the same "unexpected" result
>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) == Decimal(0.3)
False
The problem in the naive example above is the use of float arguments, which are "losslessly converted to [their] exact decimal equivalent," as explained in the docs.
The trick (implicit in the accepted answer) is to construct the Decimal instances using e.g. strings, instead of floats
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
True
or, perhaps more convenient in some cases, using tuples (<sign>, <digits>, <exponent>)
>>> Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) == Decimal((0, (3,), -1))
True
Note: this does not answer the original question, but it is closely related, and may be of help to people who end up here based on the question title.

getting Ceil() of Decimal in python?

Is there a way to get the ceil of a high precision Decimal in python?
>>> import decimal
>>> decimal.Decimal(800000000000000000001)/100000000000000000000
Decimal('8.00000000000000000001')
>>> math.ceil(decimal.Decimal(800000000000000000001)/100000000000000000000)
8.0
math.ceil converts the Decimal to a float first, losing precision, so it returns 8.0 instead of 9.
The most direct way to take the ceiling of a Decimal instance x is to use x.to_integral_exact(rounding=ROUND_CEILING). There's no need to mess with the context here. Note that this sets the Inexact and Rounded flags where appropriate; if you don't want the flags touched, use x.to_integral_value(rounding=ROUND_CEILING) instead. Example:
>>> from decimal import Decimal, ROUND_CEILING
>>> x = Decimal('-123.456')
>>> x.to_integral_exact(rounding=ROUND_CEILING)
Decimal('-123')
Unlike most of the Decimal methods, the to_integral_exact and to_integral_value methods aren't affected by the precision of the current context, so you don't have to worry about changing precision:
>>> from decimal import getcontext
>>> getcontext().prec = 2
>>> x.to_integral_exact(rounding=ROUND_CEILING)
Decimal('-123')
By the way, in Python 3.x, math.ceil works exactly as you want it to, except that it returns an int rather than a Decimal instance. That works because math.ceil is overloadable for custom types in Python 3. In Python 2, math.ceil simply converts the Decimal instance to a float first, potentially losing information in the process, so you can end up with incorrect results.
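To illustrate the Python 3 behaviour described above — math.ceil delegates to Decimal's own ceiling, so there is no detour through float and the result is an exact int:

```python
import math
from decimal import Decimal

x = Decimal('8.00000000000000000001')
print(math.ceil(x))         # 9 — the tiny fractional part is not lost
print(type(math.ceil(x)))   # <class 'int'>
```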
x = decimal.Decimal('8.00000000000000000000001')
with decimal.localcontext() as ctx:
    ctx.prec = 100000000000000000
    ctx.rounding = decimal.ROUND_CEILING
    y = x.to_integral_exact()
You can do this using the precision and rounding mode option of the Context constructor.
ctx = decimal.Context(prec=1, rounding=decimal.ROUND_CEILING)
ctx.divide(decimal.Decimal(800000000000000000001), decimal.Decimal(100000000000000000000))
EDIT: You should consider changing the accepted answer. Although the prec can be increased as needed, to_integral_exact is the simpler solution.
>>> decimal.Context(rounding=decimal.ROUND_CEILING).quantize(
... decimal.Decimal(800000000000000000001)/100000000000000000000, 0)
Decimal('9')
def decimal_ceil(x):
    int_x = int(x)  # int() truncates toward zero
    if x <= int_x:
        return int_x  # x is an integer, or negative with int_x already the ceiling
    return int_x + 1
Just use a power of ten:
import math

def lo_ceil(num, potency=0):  # use 0 for multiples of 1, 1 for multiples of 10, 2 for 100, ...
    n = num / (10.0 ** potency)
    c = math.ceil(n)
    return c * (10.0 ** potency)

lo_ceil(8.0000001, 1)  # returns 10.0