Python 3 Decimal precision

I have an issue with floating-point numbers:
>>> a = 0.4812
>>> b = 0.4813
>>> a - b
-9.999999999998899e-05
>>> from decimal import Decimal
>>> a = Decimal(0.4812)
>>> b = Decimal(0.4813)
>>> a - b
Decimal('-0.000099999999999988986587595718447118997573852539062500')
How can I get exactly -0.0001?

You need to pass the numbers to the Decimal constructor as strings; if you use float literals, they have already lost precision before the Decimal object is constructed.
>>> a = Decimal('0.4812')
>>> b = Decimal('0.4813')
>>> a - b
Decimal('-0.0001')
To illustrate more clearly:
>>> Decimal('0.4812')
Decimal('0.4812')
>>> Decimal(0.4812)
Decimal('0.481200000000000016608936448392341844737529754638671875')

If you just want to round the result, you can use the built-in round(): round(-0.000099999999999988986587595718447118997573852539062500, 4)
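Putting the two suggestions together, a short sketch (assuming Python 3): construct from strings when you can, or go through str() when the values arrive as floats, since the float's short repr usually recovers the intended decimal value:

```python
from decimal import Decimal

# Construct from strings so no float rounding happens first
a = Decimal('0.4812')
b = Decimal('0.4813')
print(a - b)  # -0.0001

# If the values arrive as floats, Decimal(str(x)) uses the float's
# shortest round-trip repr, which here recovers '0.4812' exactly
a_f, b_f = 0.4812, 0.4813
print(Decimal(str(a_f)) - Decimal(str(b_f)))  # -0.0001
```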

Related

Python Decimal module not rounding as expected

I am trying to use the Decimal module to do some FX calculations (instead of using floats).
However when I do the following, I do not get the expected value output:
>>> from decimal import Decimal
>>> x = Decimal(1.3755)
>>> y = Decimal(1.2627)
>>> z = y/(1/x)
>>> print(z)
1.736843849999999839084452447
>>>
The output should be: 1.73684385
I thought using Decimals would correct this issue. How can I resolve this rounding issue?
from decimal import Decimal
x = Decimal('1.3755')
y = Decimal('1.2627')
print(y/(1/x))
# 1.736843850000000000000000000
Construct the Decimal objects from strings instead of floats. String arguments are required:
>>> from decimal import Decimal
>>> x = Decimal('1.3755')
>>> y = Decimal('1.2627')
>>> z = y/(1/x)
>>> print(z)
1.736843850000000000000000000
>>>
Otherwise the values are built from regular Python floats, which have already lost precision.
As schwobaseggl mentioned:
Note that the float is passed to the Decimal constructor, so the precision is lost there already. The Decimal constructor has no chance of knowing the desired precision. The result is still a decimal object, but with the precision of the passed float.
Alternatively, you can format the result to n decimal places with "{0:.nf}".format() (here n = 8):
print(z)
# 1.736843849999999839084452447
print("{0:.8f}".format(z))
# 1.73684385
You can also set the precision level of the current context:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
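Note that getcontext().prec changes the precision for every subsequent calculation in the thread; if you only need a different precision for one calculation, decimal.localcontext gives a scoped context. A minimal sketch:

```python
from decimal import Decimal, getcontext, localcontext

getcontext().prec = 28  # default precision

# Temporarily change precision inside the with-block only
with localcontext() as ctx:
    ctx.prec = 6
    print(Decimal(1) / Decimal(7))  # 0.142857

# Outside the block the original precision is restored
print(Decimal(1) / Decimal(7))  # 0.1428571428571428571428571429
```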

Comparing two variables upto x decimal places

Let a = 1.11114
b = 1.11118
When I compare these two variables using the code below
if b <= a
I want the comparison to be done only up to 4 decimal places, such that a = b.
Can anyone help me with an efficient code?
Thank you!
To avoid rounding, you can multiply the number by a power of 10 and cast to int (truncating the decimal part), then divide by the same power to obtain the truncated float:
n = 4 # number of decimal digits you want to consider
a_truncated = int(a * 10**n)/10**n
See also Python setting Decimal Place range without rounding?
Extract x digits with the power of 10^x and then divide by the same:
>>> import math
>>> def truncate(number, digits) -> float:
...     stepper = 10.0 ** digits
...     return math.trunc(stepper * number) / stepper
>>> a
1.11114
>>> b
1.11118
>>> truncate(a,4) == truncate(b,4)
True
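One caveat with the float-based truncate: because the scaled float can land just below an integer, the last digit can come out one lower than expected. A Decimal-based variant (a sketch using ROUND_DOWN, going through str() as the answers above recommend) avoids the binary artifact:

```python
import math
from decimal import Decimal, ROUND_DOWN

def truncate(number, digits):
    stepper = 10.0 ** digits
    return math.trunc(stepper * number) / stepper

# 0.29 * 100 evaluates to 28.999999999999996 in binary floating
# point, so float truncation to two places yields 0.28, not 0.29
print(truncate(0.29, 2))  # 0.28

def truncate_dec(number, digits):
    q = Decimal(1).scaleb(-digits)  # e.g. Decimal('0.01') for digits=2
    return Decimal(str(number)).quantize(q, rounding=ROUND_DOWN)

print(truncate_dec(0.29, 2))  # 0.29
```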
Solution by Erwin Mayer:
You can check whether their difference is close to 0, with an absolute tolerance of 1e-4, using math.isclose:
>>> import math
>>> math.isclose(a - b, 0, abs_tol=1e-4)
True
Use the built-in round() function:
a = round(a, 4)  # 4 is the number of digits you want
b = round(b, 4)
if a >= b:
    ...  # do stuff
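Note that for these particular inputs the approaches above disagree: truncation and the isclose tolerance test treat a and b as equal, while round() does not, because 1.11118 rounds up to 1.1112. A quick check (assuming Python 3):

```python
import math

a, b = 1.11114, 1.11118

# Truncation: both become 1.1111 -> considered equal
assert math.trunc(a * 10**4) == math.trunc(b * 10**4)

# Rounding: a -> 1.1111 but b -> 1.1112 -> NOT equal
assert round(a, 4) != round(b, 4)

# Tolerance test: |a - b| = 4e-5, within abs_tol=1e-4 -> "equal"
assert math.isclose(a - b, 0, abs_tol=1e-4)
```

So pick the approach that matches the comparison semantics you actually want, rather than the one that is most convenient.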

Decimal fixed precision

I would like to use decimal for currency calculations, so I want to work with exactly two digits after the decimal point. Initially I thought that the context's prec referred to that property, but after a few experiments I feel a bit confused.
Experiment #1:
In [1]: import decimal
In [2]: decimal.getcontext().prec = 2
In [3]: a = decimal.Decimal('159.9')
In [4]: a
Out[4]: Decimal('159.9')
In [5]: b = decimal.Decimal('200')
In [6]: b
Out[6]: Decimal('200')
In [7]: b - a
Out[7]: Decimal('40')
Experiment #2:
In [8]: decimal.getcontext().prec = 4
In [9]: a = decimal.Decimal('159.9')
In [10]: a
Out[10]: Decimal('159.9')
In [11]: b = decimal.Decimal('200')
In [12]: b
Out[12]: Decimal('200')
In [13]: b - a
Out[13]: Decimal('40.1')
Experiment #3: (prec is still set to 4)
In [14]: a = decimal.Decimal('159999.9')
In [15]: a
Out[15]: Decimal('159999.9')
In [16]: b = decimal.Decimal('200000')
In [17]: b
Out[17]: Decimal('200000')
In [18]: b - a
Out[18]: Decimal('4.000E+4')
Why does it work like in my examples? How should I work with decimals in my (currency calculations) case?
The precision sets the number of significant digits, which is not equivalent to the number of digits after the decimal point.
So with a precision of 2 you get two significant digits: a number with three significant digits, like 40.1, is rounded to its two most significant digits, giving 40.
There's no easy way to set the number of digits after the decimal point with Decimal. However you could use a high precision and always round your results to two decimals:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 60 # use a higher/lower one if needed
>>> Decimal('200') - Decimal('159.9')
Decimal('40.1')
>>> r = Decimal('200') - Decimal('159.9')
>>> round(r, 2)
Decimal('40.10')
The decimal FAQ also includes a similar question and answer (using quantize):
Q. In a fixed-point application with two decimal places, some inputs have many places and need to be rounded. Others are not supposed to have excess digits and need to be validated. What methods should be used?
A. The quantize() method rounds to a fixed number of decimal places. If the Inexact trap is set, it is also useful for validation:
>>> TWOPLACES = Decimal(10) ** -2 # same as Decimal('0.01')
>>> # Round to two places
>>> Decimal('3.214').quantize(TWOPLACES)
Decimal('3.21')
>>> # Validate that a number does not exceed two places
>>> Decimal('3.21').quantize(TWOPLACES, context=Context(traps=[Inexact]))
Decimal('3.21')
>>> Decimal('3.214').quantize(TWOPLACES, context=Context(traps=[Inexact]))
Traceback (most recent call last):
...
Inexact: None
Q. Once I have valid two place inputs, how do I maintain that invariant throughout an application?
A. Some operations like addition, subtraction, and multiplication by an integer will automatically preserve fixed point. Other operations, like division and non-integer multiplication, will change the number of decimal places and need to be followed up with a quantize() step:
>>> a = Decimal('102.72') # Initial fixed-point values
>>> b = Decimal('3.17')
>>> a + b # Addition preserves fixed-point
Decimal('105.89')
>>> a - b
Decimal('99.55')
>>> a * 42 # So does integer multiplication
Decimal('4314.24')
>>> (a * b).quantize(TWOPLACES) # Must quantize non-integer multiplication
Decimal('325.62')
>>> (b / a).quantize(TWOPLACES) # And quantize division
Decimal('0.03')
In developing fixed-point applications, it is convenient to define functions to handle the quantize() step:
>>> def mul(x, y, fp=TWOPLACES):
... return (x * y).quantize(fp)
>>> def div(x, y, fp=TWOPLACES):
... return (x / y).quantize(fp)
>>> mul(a, b) # Automatically preserve fixed-point
Decimal('325.62')
>>> div(b, a)
Decimal('0.03')
The best method I've found is to set a high prec and use Decimal.quantize to round the result:
>>> decimal.getcontext().prec = 100
>>> a = Decimal('400000.123456789')
>>> b = Decimal('200000.0')
>>> a - b
Decimal('200000.123456789')
>>> (a - b).quantize(Decimal('0.01'))
Decimal('200000.12')
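One practical caveat for currency work, not raised above: quantize uses the context's rounding mode, which defaults to ROUND_HALF_EVEN (banker's rounding), while many currency conventions expect ties to round half-up. A small hypothetical helper, to_money, makes the choice explicit:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

CENTS = Decimal('0.01')

def to_money(value, rounding=ROUND_HALF_UP):
    """Quantize a value to two decimal places (hypothetical helper)."""
    return Decimal(value).quantize(CENTS, rounding=rounding)

# Default banker's rounding sends ties to the even digit:
print(Decimal('2.675').quantize(CENTS, rounding=ROUND_HALF_EVEN))  # 2.68
print(Decimal('2.665').quantize(CENTS, rounding=ROUND_HALF_EVEN))  # 2.66

# Half-up always rounds ties away from zero:
print(to_money('2.665'))  # 2.67
```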

Python - float arithmetic as string

I am working in Python with two strings that contain float-style values. For example:
a = '0.0000001'
b = '0.0003599'
I am looking for a solution to simply add or subtract the two values into a new string, keeping the decimal precision intact. I have tried converting them to floats and using a + b, etc., but this seems inconsistent.
So the resulting string in this example would be string
c = '0.0003600'
I've been over a number of examples/methods and not quite found the answer. Any help appreciated.
Looks like the decimal module should do what you want:
>>> from decimal import *
>>> a = '0.0000001'
>>> b = '0.0003599'
>>> Decimal(a)+Decimal(b)
Decimal('0.0003600')
mpmath library could do arbitrary precision float arithmetic:
>>> from mpmath import mpf
>>> a = mpf('0.0003599')
>>> b = mpf('0.0000001')
>>> print(a + b)
0.00036
Converting to a float is fine for all cases I can think of:
>>> str(sum(float(i) for i in (a, b)))
'0.00036'
>>> str(sum(map(float, (a, b))))
'0.00036'
>>> str(float(a) + float(b))
'0.00036'
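A caveat on the float approach: it happens to print the expected value here because Python's repr picks the shortest string that round-trips, but it is not reliable in general. A classic counterexample where the binary noise leaks into the string:

```python
from decimal import Decimal

# Plain float arithmetic exposes the binary representation:
print(str(0.1 + 0.2))                   # 0.30000000000000004

# Decimal arithmetic on string inputs stays exact:
print(Decimal('0.1') + Decimal('0.2'))  # 0.3
```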

how can I round a value with 1 decimal to 2 decimals

I need to round values to exactly 2 decimal places, but round() doesn't do what I need: it works for round(0.4232323, 2) = 0.42, but round(0.4, 2) gives 0.4 when I need 0.40.
How can I solve this?
0.4 and 0.40 are mathematically equivalent.
If you want to display them with two decimal places, use {:.2f} formatting:
>>> '{:.2f}'.format(0.4)
'0.40'
print("{0:.2f}".format(round(0.4232323, 2)))
If you represent these values as floats then there is no difference between 0.4 and 0.40. To print them with different precision is just a question of format strings (as per the other two answers).
However, if you want to work with decimals, there is a decimal module in Python.
>>> from decimal import Decimal
>>> a = Decimal("0.40")
>>> b = Decimal("0.4")
# They have equal values
>>> a == b
True
# But maintain their precision
>>> a + 1
Decimal('1.40')
>>> b + 1
Decimal('1.4')
>>> a - b
Decimal('0.00')
Use the quantize method to round to a particular number of places. For example:
>>> c = Decimal("0.4232323")
>>> c.quantize(Decimal("0.00"))
Decimal('0.42')
>>> str(c.quantize(Decimal("0.00")))
'0.42'
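Note that quantize both rounds and fixes the exponent, so trailing zeros are preserved when the value is converted back to a string; Decimal also supports f-string formatting directly. A short sketch:

```python
from decimal import Decimal

TWOPLACES = Decimal('0.01')

# quantize fixes the exponent, so '0.40' keeps its trailing zero
print(Decimal('0.4').quantize(TWOPLACES))        # 0.40
print(Decimal('0.4232323').quantize(TWOPLACES))  # 0.42

# Format specifiers work on Decimal just like on float
print(f"{Decimal('0.4'):.2f}")                   # 0.40
```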
