from decimal import *
getcontext().prec = 8
print(getcontext(),"\n")
x_amount = Decimal(0.025)
y_amount = Decimal(0.005)
test3 = x_amount - y_amount
print("test3",test3)
Output:
Context(prec=8, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
test3 0.020000000
Why does this return the value of test3 with 9 decimal places if the precision is set to 8, as in the example mentioned here?
And it changes to 3 decimal places if I replace the x_amount and y_amount lines in the above code with:
x_amount = Decimal('0.025')
y_amount = Decimal('0.005')
I am using decimals in a financial application and find the conversions, operations, definitions, precision, etc. very confusing. Is there a link I can refer to for the details of using decimals in Python?
Your result is correct. prec says how many digits to keep starting from the most significant non-zero digit. So in your result:
test3 0.020000000
^^^^^^^^
the digits pointed to by carets are the expected eight digits covered by the precision. The reason you get all of them is that initializing a Decimal from a float is almost always wrong: it just reproduces the inaccuracy of the float in the Decimal (try printing x_amount and y_amount to see the garbage). Passing a str is the safe way to do this.
When you do this with str, the individual Decimals "know" the last digit of meaningful precision they possess, and it only goes to the third decimal place, so the result doesn't include precision beyond that point. If you want prec capping to kick in, try initializing one of the arguments with more precision than prec, and the result will be rounded to prec.
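A minimal sketch of both behaviours described above (the 12-digit literal at the end is just an arbitrary value I chose to exceed prec):
from decimal import Decimal, getcontext

getcontext().prec = 8

# str inputs carry only the digits you typed, so the result stays at 3 places
print(Decimal('0.025') - Decimal('0.005'))       # 0.020

# float inputs drag in binary-representation noise, so the full 8 significant
# digits allowed by the context show up
print(Decimal(0.025) - Decimal(0.005))           # 0.020000000

# an operand with more significant digits than prec gets rounded to prec
# as soon as it takes part in an arithmetic operation
print(Decimal('0.123456789012') + Decimal('0'))  # 0.12345679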
Related
From the documentation page of Decimal, I thought that once we use decimal to compute, the result would be correct without any floating-point error.
But when I try this equation
from decimal import Decimal, getcontext
getcontext().prec = 250
a = Decimal('6')
b = Decimal('500000')
b = a ** b
print('prec: ' + str(getcontext().prec) + ', ', end='')
print(b.ln() / a.ln())
It gives me a different result!
I want to calculate the digit of 6**500000 in base-6 representation, so my expect result would be int(b.ln() / a.ln()) + 1, which should be 500001. However, when I set the prec to 250, it gives me the wrong result. How can I solve this?
Also, if I want to output the result without the scientific notation (i.e. 5E+5), what should I do?
The documentation clearly shows the parameters of getcontext().
If you simply execute getcontext(), it displays its built-in parameters:
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
Setting getcontext().prec = 250 only overrides the prec value.
The main thing that affects the result is the rounding parameter right after prec, which defaults to rounding=ROUND_HALF_EVEN. I tried to change it to None, but that raises an error.
So, no matter what we do, the result will change at least slightly because of the rounding parameter.
Note: your result may also be affected by the other built-in parameters.
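A small illustration of the rounding parameter's effect (using a short division instead of the asker's ln computation, just to keep the output readable):
from decimal import Decimal, getcontext, ROUND_FLOOR

getcontext().prec = 6
print(Decimal(2) / Decimal(3))   # 0.666667 with the default ROUND_HALF_EVEN

getcontext().rounding = ROUND_FLOOR
print(Decimal(2) / Decimal(3))   # 0.666666 once the rounding mode is changed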
If you're after very precise or symbolic math and/or being bitten by IEEE 754 float aliasing, check out SymPy:
>>> from sympy import log, N
>>> expr = log(6**50000) / log(6)
>>> N(expr)
50000.0000000000
The documentation for Decimal.ln() states:
Return the natural (base e) logarithm of the operand. The result is correctly rounded using the ROUND_HALF_EVEN rounding mode.
When you changed the precision, more digits of the numbers were calculated, and the result was then rounded down instead of up. The number has to be rounded because it is not necessarily representable within the specified precision, or even rational at all.
For outputting results without scientific notation, see this question.
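As a quick sketch of one common approach (not necessarily the one in the linked question): the 'f' format type prints a Decimal in plain positional notation.
from decimal import Decimal

d = Decimal('5E+5')
print(d)                 # 5E+5
print(format(d, 'f'))    # 500000
print(f'{d:f}')          # 500000, same thing via an f-string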
When I convert a float to decimal.Decimal in Python and afterwards call Decimal.shift it may give me completely wrong and unexpected results depending on the float. Why is this the case?
Converting 123.5 and shifting it:
from decimal import Decimal
a = Decimal(123.5)
print(a.shift(1)) # Gives expected result
The code above prints the expected result of 1235.0.
If I instead convert and shift 123.4:
from decimal import Decimal
a = Decimal(123.4)
print(a.shift(1)) # Gives UNexpected result
it gives me 3.418860808014869689941406250E-18 (approx. 0) which is completely unexpected and wrong.
Why is this the case?
Note:
I understand the floating-point imprecision because of the representation of floats in memory. However, I can't explain why this should give me such a completely wrong result.
Edit:
Yes, in general it would be best to not convert the floats to decimals but convert strings to decimals instead. However, this is not the point of my question. I want to understand why the shifting after float conversion gives such a completely wrong result. If I print(Decimal(123.4)) it gives 123.400000000000005684341886080801486968994140625, so after shifting I would expect it to be 1234.00000000000005684341886080801486968994140625, not nearly zero.
You need to change the Decimal constructor input to use strings instead of floats.
a = Decimal('123.5')
print(a.shift(1))
a = Decimal('123.4')
print(a.shift(1))
or
a = Decimal(str(123.5))
print(a.shift(1))
a = Decimal(str(123.4))
print(a.shift(1))
The output will be as expected.
1235.0
1234.0
Decimal instances can be constructed from integers, strings, floats, or tuples. Construction from an integer or a float performs an exact conversion of the value of that integer or float.
For floats, Decimal calls Decimal.from_float()
Note that Decimal.from_float(0.1) is not the same as Decimal('0.1'). Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is 0x1.999999999999ap-4. The exact equivalent of the value in decimal is 0.1000000000000000055511151231257827021181583404541015625.
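You can see the difference directly:
from decimal import Decimal

print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
print(Decimal('0.1'))  # 0.1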
Internally, the Python decimal library converts a float into two integers representing the numerator and denominator of a fraction that yields the float.
n, d = abs(123.4).as_integer_ratio()
It then computes k from the bit length of the denominator; since the denominator is always a power of two, k is the denominator's base-2 exponent.
k = d.bit_length() - 1
From there, k is used to build the coefficient of the decimal number by multiplying the numerator by 5 to the power k (because n / 2**k equals n * 5**k / 10**k).
coeff = str(n*5**k)
The resulting sign, coefficient, and exponent are then used to create a new Decimal object.
For the float 123.5 these values are
0 1235 -1
and for the float 123.4 these values are
0 123400000000000005684341886080801486968994140625 -45
So far, nothing is amiss.
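Those fragments can be combined into one runnable sketch that mirrors (rather than reproduces verbatim) what Decimal.from_float does:
from decimal import Decimal

f = 123.4
sign = 0 if f >= 0 else 1
n, d = abs(f).as_integer_ratio()
k = d.bit_length() - 1    # d is a power of two, so k is its base-2 exponent
coeff = str(n * 5**k)     # n / 2**k  ==  n * 5**k / 10**k

print(sign, coeff, -k)    # 0 123400000000000005684341886080801486968994140625 -45
print(Decimal(f))         # 123.400000000000005684341886080801486968994140625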
However, when you call shift, the decimal library first pads (or truncates) the coefficient to exactly the context precision before shifting. Internally it subtracts the length of the coefficient from the precision:
amount_to_pad = context.prec - len(coeff)
The default precision is only 28, and with a float like 123.4 the coefficient is much longer than that (48 digits, as shown above). The padding amount is therefore negative, so the most significant digits of the coefficient are discarded before the shift, leaving only the low-order tail and producing the tiny number you saw.
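To make that concrete, here is a hand re-creation (not the library source) of those steps with the default precision of 28; it reproduces the surprising value exactly:
from decimal import Decimal

prec = 28                                                   # the default context precision
coeff = "123400000000000005684341886080801486968994140625"  # 48 digits, exponent -45
topad = prec - len(coeff)                                   # -20, so 20 digits must go

truncated = coeff[-topad:]           # the 20 most significant digits are dropped
shifted = (truncated + "0")[-prec:]  # shift left by one, keep only prec digits

result = Decimal((0, tuple(int(d) for d in shifted), -45))
print(result)                        # 3.418860808014869689941406250E-18
print(Decimal(123.4).shift(1))       # the same value, with the default context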
A way around this is to increase the precision enough to hold the whole coefficient plus the shift: the magnitude of the exponent (45) plus the number of digits in the value you started with (4), i.e. 49.
from decimal import Decimal, getcontext
getcontext().prec = 49
a = Decimal(123.4)
print(a)
print(a.shift(1))
123.400000000000005684341886080801486968994140625
1234.000000000000056843418860808014869689941406250
The documentation for shift hints that the precision is important for this calculation:
The second operand must be an integer in the range -precision through precision.
However, it does not spell out this caveat for floats whose exact conversion produces more digits than the current precision.
I would expect this to raise some kind of error and prompt you to change your input or increase the precision, but at least you know!
@MarkDickinson noted in a comment above that you can see this Python bug tracker issue for more information: https://bugs.python.org/issue7233
In Python, you can have arbitrarily large integers (though it might take up all your memory to store them). Floats, on the other hand, will overflow eventually, and you lose precision if you try to store too many digits in one.
Which of these is the decimal.Decimal class like?
Can I have an arbitrary amount of numbers before the decimal point? What about after the decimal point? What are the limits?
Decimal has a user-specified precision; you can modify the current context to specify how many decimal digits must be preserved. Per the docs:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
That is then followed by an example:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
You could turn the precision up to 1,000,000 or more if you really needed it, but the math will be slow. In practice, most problems reach a point where you don't need any more precision: NASA stops at 16 digits of pi (15 after the decimal point), and at the scale of the entire solar system that leaves an error margin of only an inch or two for any calculation they care about. They'd need more places to the left of the decimal point for magnitude, but for most purposes the default precision of 28 is enough, and turning it up to 40 or 50 should cover any use case involving numbers from the real world.
The limit isn't to the left or the right of the decimal point; it's the total number of digits. So if you have Decimal(1000000) + Decimal("0.0001") with a precision setting of 10 or less, the result will be equal to Decimal(1000000) (it may display additional zeroes to match the precision, but the 0.0001 is dropped). The behavior in cases of rounding or overflow is configurable with signals, which can set flags or raise exceptions when those events occur. The docs go into this in greater detail.
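A short sketch of that example (the precision of 10 is taken from the sentence above):
from decimal import Decimal, getcontext

getcontext().prec = 10
total = Decimal(1000000) + Decimal("0.0001")

print(total)                      # 1000000.000 -- padded with zeroes, the 0.0001 is gone
print(total == Decimal(1000000))  # True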
Why doesn't this conversion give a number with 2 decimals?
x = 369.69
y=decimal.Decimal(x)
Decimal('369.68999999999999772626324556767940521240234375')
even though I've declared
getcontext().prec = 2
?
And why, if I try to round up to get 370.00:
y.quantize(decimal.Decimal('0.01'),rounding=decimal.ROUND_UP)
do I end up with this error?
InvalidOperation: quantize result has too many digits for current context
The issue is that x is a float, so you've lost precision as soon as you assign to x. To work around this, make x a string: "369.69". A Decimal built from a string preserves exactly the digits you give it.
When you create a Decimal object, the prec is ignored. From the documentation:
The significance of a new Decimal is determined solely by the number of digits input. Context precision and rounding only come into play during arithmetic operations.
The error you're getting in the quantize is because the number of digits in the result is greater than the set precision. prec sets the total number of digits, not the digits after the decimal point.
>>> y = decimal.Decimal(69.69)
>>> y
Decimal('69.68999999999999772626324556767940521240234375')
>>> y.quantize(decimal.Decimal('1'), rounding=decimal.ROUND_UP)
Decimal('70')
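With a string input and the default precision, the quantize call from the question no longer raises InvalidOperation (a sketch):
from decimal import Decimal, getcontext, ROUND_UP

getcontext().prec = 28                # the default: plenty of total digits
y = Decimal('369.69')                 # a string keeps the value exact
print(y.quantize(Decimal('0.01'), rounding=ROUND_UP))  # 369.69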
I have a list that contains the number '5.74536541', which I convert to a float.
I am printing it out in Python 3 using ("%0.2f" % (variable)) but it always prints out 5.75 instead of 5.74.
I know you're thinking "who cares?", but this is for a currency converter program and I don't want the amounts rounded up or down; I want them to be exact.
How can I keep it from rounding but also keep the 2 decimal places?
You shouldn't use floating point numbers for currency, due to rounding errors like you mentioned.
Your best bet is to use a fixed-precision decimal where you also have full control over how rounding and truncation works. From the docs:
>>> from decimal import *
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999999, Emax=999999999,
capitals=1, flags=[], traps=[Overflow, DivisionByZero,
InvalidOperation])
>>> getcontext().prec = 6
>>> Decimal('3.0')
Decimal('3.0')
>>> Decimal('3.1415926535')
Decimal('3.1415926535')
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85987')
>>> getcontext().rounding = ROUND_UP
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85988')
You should represent all currency-based values internally as Decimals with a high precision (the standard level of precision should be fine in your case - just leave the prec alone!). If you want to print a nicely formatted dollars and cents value to the user, using the locale module is a straightforward way to do this.
Be careful when printing as you will have to quantize the Decimal down to the correct number of places for display or the rounding will not be based on your Decimal context! You should only perform the quantize step for final display or for a single, final value - all intermediate steps should use high-precision Decimals to make any operations as accurate as possible.
>>> from decimal import *
>>> import locale
>>> locale.setlocale(locale.LC_ALL, '')
'en_AU.UTF-8'
>>> getcontext().rounding = ROUND_DOWN
>>> TWOPLACES = Decimal(10) ** -2
>>> var = Decimal('5.74536541')
>>> var
Decimal('5.74536541')
>>> var.quantize(TWOPLACES)
Decimal('5.74')
>>> locale.currency(var.quantize(TWOPLACES))
'$5.74'
If you're dealing with currency and accuracy matters, don't use float, use decimal.
Subtract the number mod 0.01 from the number,
i.e.
rounded = number - (number % 0.01)
then print it the same as before.
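In full (a small sketch; the intermediate value is still a binary float, so the usual float caveats apply):
number = 5.74536541
rounded = number - (number % 0.01)   # drop everything past the second decimal place
print("%0.2f" % rounded)             # 5.74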
That said, rounding down is not more accurate. Are you trying the old "steal money from a bank by exploiting rounding errors" scheme?
Floating point values are known as "useful approximations". Whatever you do to a floating point number—round it, truncate it, whatever—if the result is a floating point value, you don't get to decide how many digits to the right of the decimal point it has.
Never use floating point values for currency. See pydoc decimal, for example. Python's decimal module supports decimal fixed point and decimal floating point arithmetic.
Python docs warn about rounding floats.
Note: The behavior of round() for floats can be surprising: for example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This is not a bug: it’s a result of the fact that most decimal fractions can’t be represented exactly as a float.
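A quick side-by-side, using ROUND_HALF_UP here only because it matches the "schoolbook" rounding most people expect:
from decimal import Decimal, ROUND_HALF_UP

print(round(2.675, 2))        # 2.67 -- the binary float surprise from the note above
print(Decimal('2.675').quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))  # 2.68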
If you're not careful, you'll be misled by the value that appears at the interpreter prompt.
Python only prints a decimal approximation to the true decimal value of the binary approximation stored by the machine.
And
It’s important to realize that this is, in a real sense, an illusion: the value in the machine is not exactly 1/10, you’re simply rounding the display of the true machine value. This fact becomes apparent as soon as you try to do arithmetic with these values.
If the number is a string, truncate the string to only two characters after the decimal point and then convert it to a float.
Otherwise, multiply it by 10**n (where n is the number of decimal places you want to keep, here 2), truncate the result to an integer, and then divide it by 10**n.
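A small sketch of both approaches described above (the function name and the hard-coded two places are mine, purely for illustration):
def truncate_two_places(value):
    """Keep two decimal places without rounding (illustration only)."""
    if isinstance(value, str):
        whole, _, frac = value.partition('.')
        return float(whole + '.' + frac[:2])
    # otherwise: scale up, drop the extra digits, scale back down
    return int(value * 10**2) / 10**2

print(truncate_two_places('5.74536541'))  # 5.74
print(truncate_two_places(5.74536541))    # 5.74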