In Python, integers can be arbitrarily large (though a huge one may take up a lot of memory). Floats, on the other hand, will overflow eventually, and you lose precision if you try to create one with too many digits after the decimal point.
Which of these two behaviours does the decimal.Decimal class follow?
Can I have an arbitrary number of digits before the decimal point? What about after the decimal point? What are the limits?
Decimal has a user-specified precision; you can modify the current context to set how many significant decimal digits are kept. Per the docs:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
That is then followed by an example:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
You could turn the precision up to 1,000,000 or more if you really needed it, but the math would be slow. In practice, most problems reach a point where extra precision stops mattering: NASA stops at 16 digits of pi (15 after the decimal point), and at the scale of the entire solar system that gives an error margin of only an inch or two for any calculation they care about. They'd need more digits to the left of the decimal point for magnitude, but for most purposes the default precision of 28 is enough, and turning it up to 40 or 50 should cover just about any number from the real world.
The limit isn't to the left or the right of the decimal point; it's the total number of significant digits. So with a precision setting of 10 or less, Decimal(1000000) + Decimal("0.0001") will be equal to Decimal(1000000) (it may display additional zeroes to match the precision, but the 0.0001 is dropped). The behavior in cases of rounding or overflow is configurable with signals, which can set flags or raise exceptions when they occur; see the sketch below. The docs go into this in greater detail.
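A small sketch of both behaviours (the precision value and the trap handling here are just for illustration):

from decimal import Decimal, getcontext, localcontext, Inexact

getcontext().prec = 10
result = Decimal(1000000) + Decimal("0.0001")
print(result)                       # 1000000.000 -- the 0.0001 was rounded away
print(getcontext().flags[Inexact])  # truthy: the Inexact signal set its flag

# The same signal can be trapped so it raises instead of silently setting a flag:
with localcontext() as ctx:
    ctx.prec = 10
    ctx.traps[Inexact] = True
    try:
        Decimal(1000000) + Decimal("0.0001")
    except Inexact:
        print("the addition signalled Inexact")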
from decimal import *
getcontext().prec = 8
print(getcontext(),"\n")
x_amount = Decimal(0.025)
y_amount = Decimal(0.005)
test3 = x_amount - y_amount
print("test3",test3)
Output:
Context(prec=8, rounding=ROUND_HALF_EVEN, Emin=-999999, Emax=999999, capitals=1, clamp=0, flags=[], traps=[InvalidOperation, DivisionByZero, Overflow])
test3 0.020000000
Why does this print the value of test3 to 9 decimal places when the precision is set to 8, according to the example mentioned here?
And it changes to 3 decimal places if I replace the x_amount and y_amount lines in the above code with:
x_amount = Decimal('0.025')
y_amount = Decimal('0.005')
I am using decimals in a financial application and find the conversions, operations, definitions, precision, etc. very confusing. Is there a link I can refer to for the details of using decimals in Python?
Your result is correct. prec says how many digits to keep starting from the most significant non-zero digit. So in your result:
test3 0.020000000
         ^^^^^^^^
the digits pointed to by the carets are the expected eight digits covered by the precision. The reason you get all of them is that initializing a Decimal from a float is almost always a mistake: it just reproduces the inaccuracy of the float inside the Decimal (try printing x_amount and y_amount to see the garbage). Passing a str is the only safe way to do this.
When you initialize with str, the individual Decimals "know" the last digit of meaningful precision they possess, and here that only goes to the third decimal place, so the result doesn't include precision beyond that point. If you want the prec cap to kick in, initialize one of the arguments with more digits than prec, and the result will be rounded to prec, as in the sketch below.
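A short sketch of both points; the value with the extra digits in the last line is made up purely to exceed prec:

from decimal import Decimal, getcontext

getcontext().prec = 8

print(Decimal(0.025))    # 0.025000000000000001387... (the float's binary garbage, stored exactly)
print(Decimal('0.025'))  # 0.025 -- only the digits you wrote

# Construction ignores prec; only the result of an *operation* is rounded to prec.
# Here an operand carries more digits than prec, so the subtraction is rounded
# back to 8 significant digits:
print(Decimal('0.0250000000001') - Decimal('0.005'))  # 0.020000000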
Please look at the Python code below, which I've entered into a Python 3.6 interpreter:
>>> 0.00225 * 100.0
0.22499999999999998
>>> '{:.2f}'.format(0.00225 * 100.0)
'0.22'
>>> '{:.2f}'.format(0.225)
'0.23'
>>> '{:.2f}'.format(round(0.00225 * 100.0, 10))
'0.23'
Hopefully you can immediately understand why I'm frustrated. I am attempting to display value * 100.0 on my GUI, storing the full precision behind a cell but displaying only 2 decimal places (or whatever the user's precision setting is). The GUI is similar to an Excel spreadsheet.
I'd prefer not to lose the precision of something like 0.22222444937645 by rounding it to 10 places, but I also don't want a value such as 0.00225 * 100.0 displaying as 0.22.
I'm interested in hearing about a standard way of approaching a situation like this or a remedy for my specific situation. Thanks ahead of time for any help.
Consider using the Decimal module, which "provides support for fast correctly-rounded decimal floating point arithmetic." The primary advantages of Decimal relevant to your use case are:
Decimal numbers can be represented exactly. In contrast, numbers like 1.1 and 2.2 do not have exact representations in binary floating point. End users typically would not expect 1.1 + 2.2 to display as 3.3000000000000003 as it does with binary floating point.
The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero. In binary floating point, the result is 5.5511151231257827e-017. While near to zero, the differences prevent reliable equality testing and differences can accumulate. For this reason, decimal is preferred in accounting applications which have strict equality invariants.
Based on the information you've provided in the question, I cannot say how much of an overhaul migrating to Decimal would require. However, if you're creating a spreadsheet-like application and always want to preserve maximal precision, then you will probably want to refactor to use Decimal sooner or later to avoid unexpected numbers in your user-facing GUI.
To get the behavior you desire, you may need to change the rounding mode (which defaults to ROUND_HALF_EVEN) for Decimal instances.
from decimal import Decimal, getcontext, ROUND_HALF_UP
getcontext().rounding = ROUND_HALF_UP
n = round(Decimal('0.00225') * Decimal('100'), 2)
print(n)  # prints 0.23 (the value is Decimal('0.23'))
m = round(Decimal('0.00225') * 100, 2)
print(m)  # prints 0.23
perhaps use decimal? docs.python.org/2/library/decimal.html
from decimal import Decimal, ROUND_DOWN, ROUND_UP

# note: don't lower getcontext().prec here, or the multiplication below would
# already round m before quantize() gets a chance to choose the direction
n = Decimal.from_float(0.00225)  # carries the float's exact binary value
m = n * 100
print(n, m)
print(m.quantize(Decimal('.01'), rounding=ROUND_DOWN))  # 0.22
print(m.quantize(Decimal('.01'), rounding=ROUND_UP))    # 0.23
I'm importing Fraction from fractions to get a fractional representation of a real number, but the results are quite complicated for numbers that are simple with pen and paper.
Fraction(0.2) gives 3602879701896397/18014398509481984,
which is 0.20000000000000001110223024625157, almost 0.2, but I want it to simply be 1/5.
I know there's limit_denominator() for this, but what I want is simply the smallest numerator and denominator that give the exact real number, because I'm dealing with a lot of numbers over a big range, so I can't use the same limit_denominator() argument for all of them.
You can use the Fraction class to represent 0.2, and you can access the numerator and denominator as follows:
>>> from fractions import Fraction
>>> f = Fraction(1, 5)
>>> f.numerator
1
>>> f.denominator
5
Hope it helps.
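To get 1/5 for the 0.2 example specifically, here are a couple of options (a small sketch):

>>> from fractions import Fraction
>>> Fraction(0.2)
Fraction(3602879701896397, 18014398509481984)
>>> Fraction('0.2')                    # construct from the string you meant
Fraction(1, 5)
>>> Fraction(0.2).limit_denominator()  # nearest fraction with denominator <= 1,000,000
Fraction(1, 5)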
Your strange output results from floating point problems. You can in certain cases overcome this by limiting the denominator with Fraction.limit_denominator(). This can of course also introduce rounding errors, if the true denominator is larger than the threshold you use. The default value for this threshold is 1,000,000, but you can also use smaller values.
>>> import fractions
>>> print(fractions.Fraction(0.1))
3602879701896397/36028797018963968
>>> # lower the threshold to 1000
>>> print(fractions.Fraction(0.1).limit_denominator(1000))
1/10
>>> # alternatively, use a str representation as per documentation/examples
>>> print(fractions.Fraction('0.1'))
1/10
>>> # won't work for smaller fractions, use default of 1,000,000 instead
>>> print(fractions.Fraction(0.00001).limit_denominator(1000))
0
>>> print(fractions.Fraction(0.00001).limit_denominator())
1/100000
Of course, as explained in the first sentence, there are precision limitations due to the way float numbers are stored. If you have numbers on the order of 10^9, you won't get an accurate representation of 10 digits in the fractional part, as
a = 1234567890.0987654321
print(a)
demonstrates. But you might ask yourself whether you really need an accuracy of 10^-15 if your input doesn't reflect that accuracy. If you want higher precision, you have to use the decimal module right from the start, with an increased precision level throughout all mathematical operations. Even better is to keep numerators and denominators as integer values from the beginning; in Python, integer values are theoretically not restricted in size.
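A minimal sketch of that last point, with made-up values: if the inputs start life as integers (or strings), Fraction arithmetic stays exact and no float ever enters the picture:

from fractions import Fraction

price = Fraction(199, 100)   # 1.99 held exactly as a ratio of integers
quantity = 3
total = price * quantity
print(total)                 # 597/100
print(float(total))          # 5.97 -- convert to float only for display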
I'd like to pass numbers around between functions, while preserving the decimal places for the numbers.
I've discovered that if I pass a float like 10.00 into a function, the decimal places don't get used. This messes up operations like calculating percentages.
For example, x * (10 / 100) will always return 0.
But if I manage to preserve the decimal places, I end up doing x * (10.00 / 100). This returns an accurate result.
I'd like a technique that gives consistent results when I'm working with numbers whose decimal places may hold zeroes.
When you write
10 / 100
you are performing integer division (this is Python 2 behaviour), because both operands are integers. The result is 0.
If you want to perform floating point division, make one of the operands be a floating point value. For instance:
10.0 / 100
or
float(10) / 100
Do beware also that
10.0 / 100
results in a binary floating point value, and binary floating point types cannot represent the true result of 0.1 exactly. So if you want to represent the result accurately you may need to use a decimal data type; the decimal module has the functionality needed for that.
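A small sketch combining both points (the values are arbitrary; the first comment block describes Python 2, where / between two ints truncates):

from decimal import Decimal

# Python 2: 10 / 100 == 0 (integer division), 10.0 / 100 == 0.1 (binary float).
# Python 3: 10 / 100 already gives 0.1, and // is the explicit floor division.

print(10.0 / 100)                      # 0.1 as a binary float (not exactly one tenth)
print(Decimal('10') / Decimal('100'))  # 0.1, exact in decimal
print(Decimal('10.00') / 100)          # 0.1 -- mixing Decimal with int is fine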
Division in Python for float and int works differently; take a look at this question and its answers: Python division.
Moreover, if you are looking for a way to format a floating point number as a string with a fixed number of decimal places, you can use %f.
"%f" % (1.0)     # '1.000000'
"%.2f" % (1.0)   # '1.00'
"%6.2f" % (1.0)  # '  1.00'
Python 2.x will use integer division when dividing two integers unless you explicitly tell it to do otherwise. Two integers in --> one integer out.
Python 3 onwards will return, to quote PEP 238 (http://www.python.org/dev/peps/pep-0238/), "a reasonable approximation of the result of the division", i.e. it will perform floating point division and return the result without discarding the fractional part.
To enable this behaviour in earlier versions of Python, you can add:
from __future__ import division
at the very top of the module; this should get you the consistent results you want.
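A quick check of the effect (this snippet behaves the same under Python 2 with the import and under Python 3, where the import is a no-op):

from __future__ import division

print(10 / 100)   # 0.1 -- true division
print(10 // 100)  # 0   -- floor division, if the old integer behaviour is wanted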
You should use the decimal module. Each number knows how many significant digits it has.
If you're trying to preserve significant digits, the decimal module has everything you need. Example:
>>> from decimal import Decimal
>>> num = Decimal('10.00')
>>> num
Decimal('10.00')
>>> num / 10
Decimal('1.00')
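For instance, continuing that session, the "knows its significant digits" behaviour carries through other arithmetic as well:

>>> num * 3
Decimal('30.00')
>>> num + Decimal('0.1')
Decimal('10.10')
>>> num * (Decimal('10') / 100)
Decimal('1.000')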
The built-in Python str() function outputs some weird results when passed floats with many decimal digits. This is what happens:
>>> str(19.9999999999999999)
'20.0'
I'm expecting to get:
'19.9999999999999999'
Does anyone know why? and maybe workaround it?
Thanks!
It's not str() that rounds, it's the fact that you're using floats in the first place. Float types are fast, but have limited precision; in other words, they are imprecise by design. This applies to all programming languages. For more details on float quirks, please read "What Every Programmer Should Know About Floating-Point Arithmetic"
If you want to store and operate on precise numbers, use the decimal module:
>>> from decimal import Decimal
>>> str(Decimal('19.9999999999999999'))
'19.9999999999999999'
A Python float is a C double: 64 bits, with one bit for the sign, 11 for the exponent, and 52 for the mantissa. You can't fit every decimal number, to an unlimited number of digits, into 64 bits, so floating point numbers rely heavily on rounding.
If you try str(19.998), it will give you something at least close to 19.998, because 64 bits have enough precision for that, but something like 19.9999999999999999 is too precise to represent in 64 bits, so it rounds to the nearest representable value, which happens to be 20.
Please note that this is a matter of understanding floating point (fixed-length) numbers; most languages do exactly what Python does, or something very similar.
Python float is IEEE 754 64-bit binary floating point. It is limited to 53 bits of precision i.e. slightly less than 16 decimal digits of precision. 19.9999999999999999 contains 18 decimal digits; it cannot be represented exactly as a float. float("19.9999999999999999") produces the nearest floating point value, which happens to be the same as float("20.0").
>>> float("19.9999999999999999") == float("20.0")
True
If by "many decimals" you mean "many digits after the decimal point", please be aware that the same "weird" results happen when there are many decimal digits before the decimal point:
>>> float("199999999999999999")
2e+17
In Python 2, if you want the full float precision, don't use str(), use repr():
>>> x = 1. / 3.
>>> str(x)
'0.333333333333'
>>> str(x).count('3')
12
>>> repr(x)
'0.3333333333333333'
>>> repr(x).count('3')
16
>>>
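In Python 3, str() and repr() give the same shortest round-tripping representation, so this distinction is gone; if you want to see more digits than that, ask for them explicitly (a short sketch):

>>> x = 1. / 3.
>>> str(x) == repr(x)
True
>>> format(x, '.17g')
'0.33333333333333331'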
Update: It's interesting how often decimal is prescribed as a cure-all for float-induced astonishment. This is often accompanied by simple examples like 0.1 + 0.1 + 0.1 != 0.3. Nobody stops to point out that decimal has its own share of deficiencies, e.g.
>>> (1.0 / 3.0) * 3.0
1.0
>>> (Decimal('1.0') / Decimal('3.0')) * Decimal('3.0')
Decimal('0.9999999999999999999999999999')
>>>
True, float is limited to 53 binary digits of precision. By default, decimal is limited to 28 decimal digits of precision.
>>> Decimal(2) / Decimal(3)
Decimal('0.6666666666666666666666666667')
>>>
You can change the limit, but it's still limited precision. You still need to know the characteristics of the number format to use it effectively without "astonishing" results, and the extra precision comes at the cost of slower operation (though since Python 3.3 the decimal module ships with the fast C implementation that used to be the third-party cdecimal module).
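A small illustration of "you can change the limit, but it's still limited precision", with an arbitrarily low precision to make the point obvious:

>>> from decimal import Decimal, localcontext
>>> with localcontext() as ctx:
...     ctx.prec = 5
...     print(Decimal(2) / Decimal(3))
...
0.66667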
For any given binary floating point number, there is an infinite set of decimal fractions that, on input, round to that number. Python's str goes to some trouble to produce the shortest decimal fraction from this set; see GLS's paper http://kurtstephens.com/files/p372-steele.pdf for the general algorithm (IIRC they use a refinement that avoids arbitrary-precision math in most cases). You happened to input a decimal fraction that rounds to a float (IEEE double) whose shortest possible decimal fraction is not the same as the one you entered.
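Roughly speaking, that means many different decimal strings parse to the very same double, and str/repr picks the shortest one that survives the round trip. A short sketch:

>>> float('19.9999999999999999') == float('20.0') == float('19.99999999999999995')
True
>>> 0.1 == 0.1000000000000000055511151231257827021181583404541015625
True
>>> str(0.1000000000000000055511151231257827021181583404541015625)
'0.1'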