Convert to Float without Rounding Decimal Places - python

I have a list and it contains a certain number '5.74536541' in it which I convert to a float.
I am printing it out in Python 3 using ("%0.2f" % (variable)) but it always prints out 5.75 instead of 5.74.
I know you're thinking who cares, but it is for a currency converter program and I don't want the currencies to round up/down but to be exact.
How can I keep it from rounding but also keep the 2 decimal places?

You shouldn't use floating point numbers for currency, due to rounding errors like you mentioned.
Your best bet is to use a fixed-precision decimal where you also have full control over how rounding and truncation works. From the docs:
>>> from decimal import *
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999999, Emax=999999999,
capitals=1, flags=[], traps=[Overflow, DivisionByZero,
InvalidOperation])
>>> getcontext().prec = 6
>>> Decimal('3.0')
Decimal('3.0')
>>> Decimal('3.1415926535')
Decimal('3.1415926535')
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85987')
>>> getcontext().rounding = ROUND_UP
>>> Decimal('3.1415926535') + Decimal('2.7182818285')
Decimal('5.85988')
You should represent all currency-based values internally as Decimals with a high precision (the standard level of precision should be fine in your case - just leave the prec alone!). If you want to print a nicely formatted dollars and cents value to the user, using the locale module is a straightforward way to do this.
Be careful when printing as you will have to quantize the Decimal down to the correct number of places for display or the rounding will not be based on your Decimal context! You should only perform the quantize step for final display or for a single, final value - all intermediate steps should use high-precision Decimals to make any operations as accurate as possible.
>>> from decimal import *
>>> import locale
>>> locale.setlocale(locale.LC_ALL, '')
'en_AU.UTF-8'
>>> getcontext().rounding = ROUND_DOWN
>>> TWOPLACES = Decimal(10) ** -2
>>> var = Decimal('5.74536541')
>>> var
Decimal('5.74536541')
>>> var.quantize(TWOPLACES)
Decimal('5.74')
>>> locale.currency(var.quantize(TWOPLACES))
'$5.74'

If you're dealing with currency and accuracy matters, don't use float, use decimal.

Take away the number mod 0.01
i.e.
rounded = number - (number % 0.01)
then print it the same as before.
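A minimal sketch of that idea (the helper name is mine; note that % on binary floats is itself approximate, so the result can still land a hair above or below the true truncation):
def truncate_cents(number):
    # Subtract the remainder mod 0.01 to drop everything past two places.
    # 0.01 has no exact binary representation, so this is only approximate.
    return number - (number % 0.01)

print("%0.2f" % truncate_cents(5.74536541))  # typically prints 5.74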
This said, rounding down is not more accurate. Are you trying the old "steal money from a bank by exploiting rounding errors" scheme?

Floating point values are known as "useful approximations". Whatever you do to a floating point number—round it, truncate it, whatever—if the result is a floating point value, you don't get to decide how many digits to the right of the decimal point it has.
Never use floating point values for currency. See pydoc decimal, for example. Python's decimal module supports decimal fixed point and decimal floating point arithmetic.
Python docs warn about rounding floats.
Note: The behavior of round() for floats can be surprising: for
example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This
is not a bug: it’s a result of the fact that most decimal fractions
can’t be represented exactly as a float.
If you're not careful, you'll be misled by the value that appears at the interpreter prompt.
Python only prints a decimal approximation to the true decimal value
of the binary approximation stored by the machine.
And
It’s important to realize that this is, in a real sense, an illusion:
the value in the machine is not exactly 1/10, you’re simply rounding
the display of the true machine value. This fact becomes apparent as
soon as you try to do arithmetic with these values

If the number is a string, truncate the string two characters after the decimal point and then convert it to a float.
Otherwise multiply it by 10^n (where n is the number of decimal places you want to keep), truncate the result to an integer, and then divide by 10^n.
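A sketch of both branches (the helper names are my own):
import math

def truncate_string(s, places=2):
    # Keep only `places` characters after the decimal point, then convert.
    whole, _, frac = s.partition('.')
    return float(whole + '.' + frac[:places])

def truncate_float(x, places=2):
    # Scale up, drop the fractional part, scale back down.
    factor = 10 ** places
    return math.trunc(x * factor) / factor

print(truncate_string('5.74536541'))  # 5.74
print(truncate_float(5.74536541))     # 5.74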

Related

How to avoid incorrect rounding with numpy.round?

I'm working with floating point numbers. If I do:
import numpy as np
np.round(100.045, 2)
I get:
Out[15]: 100.04
Obviously, this should be 100.05. I know about the existence of IEEE 754 and that the way that floating point numbers are stored is the cause of this rounding error.
My question is: how can I avoid this error?
You are partly right, often the cause of this "incorrect rounding" is because of the way floating point numbers are stored. Some float literals can be represented exactly as floating point numbers while others cannot.
>>> a = 100.045
>>> a.as_integer_ratio() # not exact
(7040041011254395, 70368744177664)
>>> a = 0.25
>>> a.as_integer_ratio() # exact
(1, 4)
It's also important to know that there is no way you can restore the literal you used (100.045) from the resulting floating point number. So the only thing you can do is to use an arbitrary precision data type instead of the literal. For example you could use Fraction or Decimal (just to mention two built-in types).
I mentioned that you cannot restore the literal once it is parsed as float - so you have to input it as string or something else that represents the number exactly and is supported by these data types:
>>> from fractions import Fraction
>>> f = Fraction(100045, 100)
>>> f
Fraction(20009, 20)
>>> f = Fraction("100.045")
>>> f
Fraction(20009, 20)
>>> from decimal import Decimal
>>> Decimal("100.045")
Decimal('100.045')
However, these don't work well with NumPy, and even where you can get them to work at all, they will almost certainly be very slow compared to basic floating point operations.
>>> import numpy as np
>>> a = np.array([Decimal("100.045") for _ in range(1000)])
>>> np.round(a)
AttributeError: 'decimal.Decimal' object has no attribute 'rint'
In the beginning I said that you're only partly right. There is another twist!
You mentioned that rounding 100.045 will obviously give 100.05. But that's not obvious at all, in your case it is even wrong (in the context of floating point math in programming - it would be true for "normal calculations"). In many programming languages a "half" value (where the number after the decimal you're rounding is 5) isn't always rounded up - for example Python (and NumPy) use a "round half to even" approach because it's less biased. For example 0.5 will be rounded to 0 while 1.5 will be rounded to 2.
So even if 100.045 could be represented exactly as float - it would still round to 100.04 because of that rounding rule!
>>> round(Fraction("100.045"), 2)
Fraction(2501, 25)
>>> 2501 / 25
100.04
>>> d = Decimal("100.045")
>>> round(d, 2)
Decimal('100.04')
This is even mentioned in the NumPy docs for numpy.around:
Notes
For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in the IEEE floating point standard [R1011] and errors introduced when scaling by powers of ten.
(Emphasis mine.)
The only numeric type in Python (at least that I know of) that allows setting the rounding rule manually is Decimal, via ROUND_HALF_UP:
>>> from decimal import Decimal, getcontext, ROUND_HALF_UP
>>> dc = getcontext()
>>> dc.rounding = ROUND_HALF_UP
>>> d = Decimal("100.045")
>>> round(d, 2)
Decimal('100.05')
Summary
So to avoid the "error" you have to:
prevent Python from parsing the value as a floating point number,
use a data type that can represent it exactly,
and manually override the default rounding mode so that "halves" are rounded up.
(And abandon NumPy, because it doesn't have arbitrary-precision data types.)
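Putting the summary together, a minimal sketch (assuming the value arrives as a string, never as a float):
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value_str, places=2):
    # Parse from a string so Python never creates a binary float,
    # then quantize with an explicit half-up rule.
    exponent = Decimal(10) ** -places
    return Decimal(value_str).quantize(exponent, rounding=ROUND_HALF_UP)

print(round_half_up("100.045"))  # 100.05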
Basically there is no general solution for this problem IMO, unless you have a general rule for all the different cases (see Floating Point Arithmetic: Issues and Limitations). However, in this case you can round the decimal part separately:
In [24]: dec, integ = np.modf(100.045)
In [25]: integ + np.round(dec, 2)
Out[25]: 100.05
The reason for such behavior is not that separating the integer from the decimal part makes any difference to round()'s logic. It's that np.modf gives you the decimal part of the number as a float of its own, and that float is itself a rounded representation of the true decimal part.
In this case here is what dec is:
In [30]: dec
Out[30]: 0.045000000000001705
And you can check that round gives the same result with 0.045:
In [31]: round(0.045, 2)
Out[31]: 0.04
Now if you try another number like 100.0333, the decimal part comes out slightly smaller than the literal; as mentioned, the result you get then depends on your rounding policy.
In [37]: dec, i = np.modf(100.0333)
In [38]: dec
Out[38]: 0.033299999999997
There are also modules like fractions and decimal that provide support for fast, correctly-rounded decimal floating point and rational arithmetic, which you can use in situations like this.
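For example, Fraction keeps the value exact end to end (a quick sketch):
>>> from fractions import Fraction
>>> x = Fraction("100.045")     # exact; no float ever created
>>> x + Fraction("0.005")
Fraction(2001, 20)
>>> float(x + Fraction("0.005"))
100.05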
This is not a bug, but a feature )))
you can simply use this trick:
import math

def myround(val):
    "Fix Python's round: nudge exact halves upward before rounding"
    d, v = math.modf(val)
    if d == 0.5:
        val += 0.000000001
    return round(val)
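With the import in place, the trick behaves like this (note it only handles positive halves, since math.modf of a negative number returns a negative fractional part):
>>> myround(2.5)   # built-in round(2.5) gives 2
3
>>> myround(1.5)   # built-in round(1.5) also gives 2
2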

Division by 3 in Python

I am new to Python and while experimenting with operators, I came across this:
>>> 7.0 / 3
2.3333333333333335
Shouldn't the result be 2.3333333333333333, or maybe 2.3333333333333334? Why is it rounding the number in such a way?
Also, with regard to floor division in Python 2.7 my results were:
>>> 5 / 2
2
>>> 5 // 2
2
>>> 5.0 / 2
2.5
>>> 5.0 // 2
2.0
So my observation is that floor division returns the integer quotient even in the case of floating point numbers, while normal division returns the decimal value. Is this true?
Take a look at this: 0.30000000000000004.com
Your language isn't broken, it's doing floating point math. Computers can only natively store integers, so they need some way of representing decimal numbers. This representation comes with some degree of inaccuracy. That's why, more often than not, .1 + .2 != .3.
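The canonical demonstration, straight from the Python prompt:
>>> .1 + .2
0.30000000000000004
>>> .1 + .2 == .3
False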
Shouldn't the result be 2.3333333333333333, or maybe 2.3333333333333334? Why is it rounding the number in such a way?
The key is that the number is being rounded twice.
The first rounding is part of the division operation, rounding the number to the nearest double-precision floating point value. This is a binary operation not a decimal one.
The second rounding is part of converting the floating point number to a decimal representation for display. It is possible to represent the exact value of any binary fraction in decimal, but it is usually not desirable as in most applications doing so will simply result in many digits of false-precision. Python instead outputs the shortest decimal approximation that will round-trip to the correct floating point value.
We can better see what is going on by using the Fraction and Decimal types: unlike converting directly to a string, converting a floating point number to a Fraction or Decimal gives the exact value. We can also use the Fraction type to determine the error in our calculation.
>>> from fractions import Fraction
>>> from decimal import Decimal
>>> 7.0 / 3
2.3333333333333335
>>> Decimal(7.0 / 3)
Decimal('2.333333333333333481363069950020872056484222412109375')
>>> Fraction(7.0 / 3)
Fraction(5254199565265579, 2251799813685248)
>>> Fraction(7,3) - Fraction(7.0 / 3)
Fraction(-1, 6755399441055744)
The conversion via type Decimal shows us the exact value of the floating point number and demonstrates the many digits of false-precision that typically result from exact conversion of a floating point value to decimal.
The conversion to a Fraction is also interesting: the denominator is 2251799813685248, which is 2^51. This makes perfect sense, since a double-precision float has 53 effective bits of mantissa and we need two of those for the integral part of the result, leaving 51 for the fractional part.
The error in our floating point calculation is 1/6755399441055744, or (1/3) * 2^-51. This error is less than half our precision step of 2^-51, so the answer was indeed correctly rounded to a double-precision floating point value.
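You can verify both claims directly (the half-step bound below uses 2^-52, which is half of 2^-51):
>>> 2251799813685248 == 2**51
True
>>> from fractions import Fraction
>>> Fraction(1, 6755399441055744) < Fraction(1, 2**52)
True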

What is the default rounding mode of string formatter in Python?

>>> '%f'%0.8407745
'0.840774'
>>> '%f'%0.8407755
'0.840776'
>>> '%f'%-0.8407755
'-0.840776'
>>> '%f'%-0.8407745
'-0.840774'
The results look weird. It is sometimes floor and sometimes ceil.
What is the default rounding mode of string formatter in Python?
Floating point numbers are always an approximation. 0.8407745 is not exactly 0.8407745:
>>> '%.53f' % 0.8407745
'0.84077449999999998020427938172360882163047790527343750'
So the default formatter, rounding to 6 decimals, correctly rounds that value to 0.840774.
0.8407755 on the other hand is:
>>> '%.53f' % 0.8407755
'0.84077550000000000895994389793486334383487701416015625'
and should thus be rounded up.
See The Floating Point Guide for a good introduction as to why that is. (Summary: floating point numbers are represented by the sum of binary fractions).
As per the printf-style formatting table in the Python docs, %f defaults to 6 digits of precision. So the data is rounded off to 6 digits after the decimal point.
Quoting from the notes section of that table
The alternate form causes the result to always contain a decimal
point, even if no digits follow it.
The precision determines the number of digits after the decimal point
and defaults to 6.
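You can see the stored value directly by asking %f for more digits; the output below is just the 53-digit expansion shown earlier, cut to 20 places:
>>> '%.20f' % 0.8407745
'0.84077449999999998020'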

Why is Python's Decimal function defaulting to 54 places?

After inputting
from decimal import *
getcontext().prec = 6
Decimal (1) / Decimal (7)
I get the value
Decimal('0.142857')
However if I enter Decimal (1.0/7) I get
Decimal('0.142857142857142849212692681248881854116916656494140625')
The 1.0 / 7 is computed first as a binary floating point number, good to 53 bits (about 17 significant decimal digits) of precision. This happens before the Decimal constructor sees it:
>>> d = 1.0 / 7
>>> type(d)
<type 'float'>
>>> d.as_integer_ratio()
(2573485501354569, 18014398509481984)
The binary fraction, 2573485501354569 / 18014398509481984 is as close as binary floating point can get using 53 bits of precision. It is not exactly 1/7th, but it's pretty close.
The Decimal constructor then converts the binary fraction to as many places as necessary to get an exact decimal equivalent. The result you're seeing is what you get when you evaluate 2573485501354569 / 18014398509481984 exactly:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 100
>>> Decimal(2573485501354569) / Decimal(18014398509481984)
Decimal('0.142857142857142849212692681248881854116916656494140625')
Learning point 1: Binary floating point computes binary fractions to 53 bits of precision. The result is rounded if necessary.
Learning point 2: The Decimal constructor converts binary floating point numbers to decimals losslessly (no rounding). This tends to result in many more digits of precision than you might expect (See the 6th question in the Decimal FAQ).
Learning point 3: The decimal module is designed to treat all numbers as being exact. Only the results of computations get rounded to the context precision. The binary floating point input is converted to decimal exactly and context precision isn't applied until you do a computation with the number (See the final question and answer in the Decimal FAQ for details).
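A quick way to see learning point 3 in action (this mirrors the Decimal FAQ): unary plus counts as a computation, so it applies the context precision:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 6
>>> d = Decimal(1.0 / 7)   # converted exactly; prec not applied yet
>>> +d                     # unary plus is a computation, so prec applies
Decimal('0.142857')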
Executive summary: Don't do binary floating point division before handing the numbers to the decimal module. Let it do the work to your desired precision.
Hope this helps :-)

Why does str() round up floats?

The built-in Python str() function outputs some weird results when passing in floats with many decimals. This is what happens:
>>> str(19.9999999999999999)
'20.0'
I'm expecting to get:
>>> '19.9999999999999999'
Does anyone know why? and maybe workaround it?
Thanks!
It's not str() that rounds, it's the fact that you're using floats in the first place. Float types are fast, but have limited precision; in other words, they are imprecise by design. This applies to all programming languages. For more details on float quirks, please read "What Every Programmer Should Know About Floating-Point Arithmetic"
If you want to store and operate on precise numbers, use the decimal module:
>>> from decimal import Decimal
>>> str(Decimal('19.9999999999999999'))
'19.9999999999999999'
A C float has 32 bits; a Python float is a 64-bit C double. Either way, one of those bits is allocated for the sign, some for the exponent, and the rest for the mantissa. You can't fit every decimal out to an infinite number of digits into a fixed number of bits, so floating point numbers rely heavily on rounding.
If you try str(19.998), it will give you something at least close to 19.998, because 64 bits have enough precision to represent that, but 19.9999999999999999 is too precise to represent in 64 bits, so it rounds to the nearest representable value, which happens to be 20.
Please note that this is a consequence of fixed-length floating point numbers, not a Python quirk. Most languages do exactly (or very nearly) what Python does.
Python float is IEEE 754 64-bit binary floating point. It is limited to 53 bits of precision i.e. slightly less than 16 decimal digits of precision. 19.9999999999999999 contains 18 decimal digits; it cannot be represented exactly as a float. float("19.9999999999999999") produces the nearest floating point value, which happens to be the same as float("20.0").
>>> float("19.9999999999999999") == float("20.0")
True
If by "many decimals" you mean "many digits after the decimal point", please be aware that the same "weird" results happen when there are many decimal digits before the decimal point:
>>> float("199999999999999999")
2e+17
If you want the full float precision, don't use str(), use repr() (note: the 12-digit str() below is Python 2 behavior; in Python 3, str() and repr() of a float are identical):
>>> x = 1. / 3.
>>> str(x)
'0.333333333333'
>>> str(x).count('3')
12
>>> repr(x)
'0.3333333333333333'
>>> repr(x).count('3')
16
>>>
Update: It's interesting how often decimal is prescribed as a cure-all for float-induced astonishment. This is often accompanied by simple examples like 0.1 + 0.1 + 0.1 != 0.3. Nobody stops to point out that decimal has its share of deficiencies, e.g.
>>> (1.0 / 3.0) * 3.0
1.0
>>> (Decimal('1.0') / Decimal('3.0')) * Decimal('3.0')
Decimal('0.9999999999999999999999999999')
>>>
True, float is limited to 53 binary digits of precision. By default, decimal is limited to 28 decimal digits of precision.
>>> Decimal(2) / Decimal(3)
Decimal('0.6666666666666666666666666667')
>>>
You can change the limit, but it's still limited precision. You still need to know the characteristics of the number format to use it effectively without "astonishing" results, and the extra precision is bought by slower operation (unless you use the 3rd-party cdecimal module, whose code became the built-in decimal implementation in Python 3.3).
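For instance, raising the context precision just moves the cutoff:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 10
>>> Decimal(2) / Decimal(3)
Decimal('0.6666666667')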
For any given binary floating point number, there is an infinite set of decimal fractions that, on input, round to that number. Python's str goes to some trouble to produce the shortest decimal fraction from this set; see GLS's paper http://kurtstephens.com/files/p372-steele.pdf for the general algorithm (IIRC they use a refinement that avoids arbitrary-precision math in most cases). You happened to input a decimal fraction that rounds to a float (IEEE double) whose shortest possible decimal fraction is not the same as the one you entered.
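A quick sketch of what "shortest decimal fraction that round-trips" means in practice:
>>> # many different decimal strings round to the same double...
>>> float('0.1') == float('0.1000000000000000055511151231257827')
True
>>> # ...and repr picks the shortest one that round-trips
>>> repr(float('0.1000000000000000055511151231257827'))
'0.1'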
