I'm using the modulus operator and I'm getting some floating-point errors. For example,
>>> 7.2%3
1.2000000000000002
Is my only recourse to handle this by using the round function? E.g.
>>> round(7.2%3, 1)
1.2
I don't know a priori how many digits I'm going to need to round to, so I'm wondering if there's a better solution?
If you want arbitrary precision, use the decimal module:
>>> import decimal
>>> decimal.Decimal('7.2') % decimal.Decimal('3')
Decimal('1.2')
Please read the documentation carefully.
Notice that I used a str as the argument to Decimal. Look what happens if I don't:
>>> decimal.Decimal(7.2) % decimal.Decimal(3)
Decimal('1.200000000000000177635683940')
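If you want exact rational arithmetic instead, a similar sketch works with the standard fractions module (again constructing from a str, for the same reason):
>>> from fractions import Fraction
>>> Fraction('7.2') % 3
Fraction(6, 5)
>>> float(Fraction('7.2') % 3)
1.2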
I have an algorithm that is calculating:
result = int(14949283383840498/5262*27115)
The correct result should be 77033412951888085, but Python 3.8 gives me 77033412951888080.
I also have tried the following:
>>> result = 77033412951888085
>>> print(result)
77033412951888085
>>> print(int(result))
77033412951888085
>>> print(float(result))
7.703341295188808e+16
>>> print(int(float(result)))
77033412951888080
It seems the problem occurs when I cast the float to int. What am I missing?
PS: I have found that using result = 14949283383840498//5262*27115 I get the right answer!
Casting is not the issue. Floating-point arithmetic has limited precision. See https://docs.python.org/3/tutorial/floatingpoint.html
You need to either use integer division or the decimal module, which defaults to 28 significant digits of precision.
Using integer division
result = 14949283383840498 // 5262 * 27115
print(result)
Output:
77033412951888085
Using the decimal module
from decimal import Decimal
result = Decimal(14949283383840498) / 5262 * 27115
print(result)
Output:
77033412951888085
It is a precision limitation:
>>> result = 14949283383840498/5262*27115
>>> result
7.703341295188808e+16
In this case, result is a float.
You can see that the precision is 15 digits.
Converting that to an int, you can see that the last non-zero digit is 8, which matches what the float shows when printed.
Try the following:
>>> import sys
>>> print(sys.float_info.dig)
15
dig is the maximum number of decimal digits that can be faithfully represented in a float.
A very good explanation regarding this issue is available here.
But there are ways to do better in Python; see the Python docs:
For use cases which require exact decimal representation, try using
the decimal module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.
Another form of exact arithmetic is supported by the fractions module
which implements arithmetic based on rational numbers (so the numbers
like 1/3 can be represented exactly).
If you are a heavy user of floating point operations you should take a
look at the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project.
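As a quick illustration of the fractions option, exact rational arithmetic avoids the rounding that floats exhibit:
>>> from fractions import Fraction
>>> Fraction(1, 3) + Fraction(1, 3) + Fraction(1, 3)
Fraction(1, 1)
>>> Fraction(1, 10) + Fraction(2, 10)
Fraction(3, 10)
>>> 0.1 + 0.2
0.30000000000000004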
I don't know how to make it so that when I divide something like 5 / 2, the result has more than one decimal place after it. Instead of the 2.5 it would normally return, I want it to return 2.50. Is there any way to do that without having to import a library? If there isn't a good and efficient way, could someone point me in the right direction to where I should start reading about how to do this?
You should probably import the decimal module and use that:
import decimal
# you can use a string "2.50"
print(decimal.Decimal("2.50"))
# or you can use a float 2.5
print(decimal.Decimal(2.5).quantize(decimal.Decimal("0.01")))
# if your float has more decimal places it will be rounded (not floored)
print(decimal.Decimal(2.556).quantize(decimal.Decimal("0.01")))
Here are the docs: https://docs.python.org/3.10/library/decimal.html
Also you can get it just as a string using "string formatting":
print('{:.2f}'.format(2.5))
Here are the docs for that: https://docs.python.org/3/library/string.html#format-specification-mini-language
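On Python 3.6+, an f-string gives the same result more concisely:
>>> print(f"{2.5:.2f}")
2.50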
I am trying to convert 123456789.123456789 into 123,456,789.123456789.
Say:
>>> f = 123456789.123456789
>>> "{:0,f}".format(f)
'123,456,789.123457'
How do I use format without it automatically rounding off at the millionths place?
Try something like:
>>> '{:,.8f}'.format(f)
'123,456,789.12345679'
This rounds the number to 8 decimal places.
Note that for this specific value, floating point dictates that str(f) => '123456789.12345679', so some rounding at the higher precision is inevitable unless you use Decimal.
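If you need every digit preserved, a sketch using Decimal instead, which supports the same comma grouping in its format specs:
>>> from decimal import Decimal
>>> '{:,f}'.format(Decimal('123456789.123456789'))
'123,456,789.123456789'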
>>> str(1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702)
'1.41421356237'
Is there a way I can make str() record more digits of the number into the string? I don't understand why it truncates by default.
Python's floating point numbers use double precision only, which is 64 bits. They simply cannot represent (significantly) more digits than you're seeing.
If you need more, have a look at the built-in decimal module, or the mpmath package.
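For instance, a minimal sketch with mpmath (a third-party package, pip install mpmath):
>>> from mpmath import mp
>>> mp.dps = 50  # work with 50 significant decimal digits
>>> print(mp.sqrt(2))
1.4142135623730950488016887242096980785696718753769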
Try this:
>>> from decimal import *
>>> Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
The float literal is rounded by default to fit in the space made available for it (i.e. it's not because of str):
>>> 1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702
1.4142135623730951
If you need more decimal places use decimal instead.
The Python compiler is rounding; your float literal has more precision than can be represented in a C double. Express the number as a string in the first place if you need more precision.
That's because it's converting to a float. It's not the conversion to the string that's causing it.
You should use decimal.Decimal for representing such high precision numbers.
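For instance, a minimal sketch that reproduces the digits above with the decimal module (the precision value 50 is just an illustration):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 50  # significant digits used by subsequent operations
>>> Decimal(2).sqrt()
Decimal('1.4142135623730950488016887242096980785696718753769')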
I have a string:
x = "12.000"
And I want to convert it to a number. However, I have tried int, float, and others, but I only get 12.0, and I want to keep all the zeroes. Please help!
I want x = 12.000 as a result.
decimal.Decimal allows you to use a specific precision.
>>> decimal.Decimal('12.000')
Decimal('12.000')
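Note that Decimal arithmetic also preserves the significance of the trailing zeroes, so they survive calculations:
>>> decimal.Decimal('12.000') * 2
Decimal('24.000')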
If you really want to perform calculations that take precision into account, the easiest way is probably to use the uncertainties module. Here is an example:
>>> import uncertainties
>>> x = uncertainties.ufloat_fromstr('12.000')  # with no +/- given, an uncertainty of 1 on the last digit is implied
>>> x
12.0+/-0.001
>>> print(2*x)
24.0+/-0.002
The uncertainties module transparently handles uncertainties (precision) for you, whatever the complexity of the mathematical expressions involved.
The decimal module, on the other hand, does not handle uncertainties, but instead sets the number of digits after the decimal point: you can't trust all the digits given by the decimal module. Thus,
>>> 100*decimal.Decimal('12.1')
Decimal('1210.0')
whereas 100*(12.1±0.1) = 1210±10 (not 1210.0±0.1):
>>> 100*uncertainties.ufloat_fromstr('12.1')
1210.0+/-10.0
Thus, the decimal module gives '1210.0' even though the precision on 100*(12.1±0.1) is 100 times larger than 0.1.
So, if you want numbers that have a fixed number of digits after the decimal point (like for accounting applications), the decimal module is good; if you instead need to perform calculations with uncertainties, then the uncertainties module is appropriate.
(Disclaimer: I'm the author of the uncertainties module.)
You may be interested in Python's decimal module.
You can set the precision with getcontext().prec.
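A minimal sketch (the precision value is just an illustration):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 6  # significant digits, not digits after the point
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')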