I have done some research throughout SO and I believe this is not a duplicate of How to get largest possible precision? (Python - Decimal) or Arithmetic precision problems with large numbers or How to store a big floating point number in python variable?
Let's say that I have this number: 11400361308443875328.123123123123
What data type can I use to store this in Python? I have tried float and Decimal; here are the results they give me:
from decimal import Decimal
x = Decimal('11400361308443875328.123123123123123') + Decimal('11400361308443875328.123123123123123')
print("{:12f}".format(x))
# 22800722616887750656.24624625
y = float(11400361308443875328.123123123123) + float(11400361308443875328.123123123123)
print("{:12f}".format(y))
# 22800722616887750656.000000
z = Decimal('0.123123123123123') + Decimal('0.123123123123123')
print("{:12f}".format(z))
# 0.246246246246246
I need the degree of precision that z has. How should I store this big number with full fractional precision so that I can do mathematical operations on it? Is there some trick to doing this?
For the question of why I need this high degree of precision: it comes from a coding challenge (this is not the actual challenge question), and the submission is graded with a leeway of ±10^-6.
If decimal's default precision (28 significant digits) is not enough, you can change it by modifying the value of getcontext().prec; see https://docs.python.org/3/library/decimal.html#module-decimal
from decimal import Decimal, getcontext
getcontext().prec = 50
x = Decimal('11400361308443875328.123123123123123') + Decimal('11400361308443875328.123123123123123123')
print(x) # 22800722616887750656.246246246246246123
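If the submission really is graded to within ±10^-6, as mentioned above, you can also round the final result to that tolerance with quantize. A minimal sketch; the 40-digit precision is an illustrative choice (roughly 20 integer digits plus 12 fractional digits, with guard digits to spare), not something the original answer prescribes:
from decimal import Decimal, getcontext

# prec counts significant digits, not decimal places, so it must cover
# the whole integer part as well as the fraction.
getcontext().prec = 40

x = Decimal('11400361308443875328.123123123123')
y = x + x

# Round the final result to the challenge's 1e-6 tolerance.
print(y.quantize(Decimal('1e-6')))  # 22800722616887750656.246246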
In Python, when dividing big values I get inaccurate output. For example:
(1227073724601519345/101) = 12149244798034845. But in Python it becomes
(1227073724601519345/101) = 1.2149244798034844e+16 which converted to int is 12149244798034844.
As you can see,
(correct_output - approx_output) = 1
Is there any way I can avoid this? There is no such inaccuracy when multiplying even bigger numbers; for example:
(123468274768408415066917747313280346049^2) - (56 * (16499142225694642619627981620326144780^2)) = 1
which is accurate.
Computers commonly use the IEEE 754 standard for floating-point arithmetic. That means floating point numbers have a limited precision of 53 bits (about 15 or 16 decimal digits).
Since you have used x / y, Python has given you a floating point result, and as the exact result would require more than 15 decimal digits, it cannot be accurate.
But Python also has an integer division operator (//). Integers in Python 3 are multi-precision numbers, meaning that they can represent arbitrarily large integers (limited only by the available memory). That is why you get an accurate result when multiplying large numbers. So to get the exact integer result, you should use this:
1227073724601519345 // 101
which gives 12149244798034845, as expected.
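If you also need the remainder, the built-in divmod performs both exact integer operations at once; for instance:
>>> q, r = divmod(1227073724601519345, 101)
>>> q
12149244798034845
>>> r
0
>>> q * 101 + r == 1227073724601519345
True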
Use integer division, e.g., 1227073724601519345 // 101.
I don't know if this is an obvious bug, but while running a Python script for varying the parameters of a simulation, I realized the results with delta = 0.29 and delta = 0.58 were missing. On investigation, I noticed that the following Python code:
for i_delta in range(0, 101, 1):
    delta = float(i_delta) / 100
    (...)
    filename = 'foo' + str(int(delta * 100)) + '.dat'
generated identical files for delta = 0.28 and 0.29 (and likewise for 0.57 and 0.58), the reason being that Python returns float(29)/100 as 0.28999999999999998. But it isn't a systematic error, in the sense that it doesn't happen to every integer. So I created the following Python script:
import sys

n = int(sys.argv[1])
for i in range(0, n + 1):
    a = int(100 * (float(i) / 100))
    if i != a:
        print(i, a)
And I can't see any pattern in the numbers for which this rounding error happens. Why does this happen with those particular numbers?
Any number that can't be built from a finite sum of exact powers of two can't be represented exactly as a binary floating point number; it has to be approximated. Sometimes the closest approximation is less than the actual number.
Read What Every Computer Scientist Should Know About Floating-Point Arithmetic.
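To see the approximation concretely on CPython, where a float is an IEEE 754 double:
>>> "{:.20f}".format(float(29) / 100)
'0.28999999999999998002'
>>> int(100 * (float(29) / 100))   # the mismatch from the question
28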
It's a well-known consequence of the nature of floating point numbers.
If you want to do decimal arithmetic rather than binary floating point arithmetic, there are libraries for this.
E.g.,
>>> from decimal import Decimal
>>> Decimal(29)/Decimal(100)
Decimal('0.29')
>>> Decimal('0.29')*100
Decimal('29')
>>> int(Decimal('29'))
29
In general, decimal is probably overkill, and it will still have rounding errors in the rare cases where the number does not have a finite decimal representation (for example, any fraction whose lowest-terms denominator has a prime factor other than 2 or 5, the prime factors of the decimal base 10). For example:
>>> s = Decimal(7)
>>> Decimal(1)/s/s/s/s/s/s/s*s*s*s*s*s*s*s
Decimal('0.9999999999999999999999999996')
>>> int(Decimal('0.9999999999999999999999999996'))
0
So it's best to round before casting floating point values to int, unless you actually want truncation.
>>> int(1.9999)
1
>>> int(round(1.999))
2
Another alternative is the Fraction class from the fractions module, which doesn't approximate at all: it just adds, subtracts, and multiplies the integer numerators and denominators as necessary.
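For instance, the question's failing computation stays exact with Fraction:
>>> from fractions import Fraction
>>> Fraction(29, 100) * 100
Fraction(29, 1)
>>> int(Fraction(29, 100) * 100)
29
>>> Fraction(1, 7) * 7   # exact, unlike the Decimal example above
Fraction(1, 1)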
I'd like to pass numbers around between functions, while preserving the decimal places for the numbers.
I've discovered that if I pass a float like 10.00 into a function, the decimal places don't get used. This messes up operations like calculating percentages.
For example, x * (10 / 100) will always return 0.
But if I manage to preserve the decimal places, I end up doing x * (10.00 / 100). This returns an accurate result.
I'd like a technique that gives consistent results when I'm working with numbers whose decimal places may hold zeroes.
When you write
10 / 100
you are performing integer division (in Python 2), because both operands are integers. The result is 0.
If you want to perform floating point division, make one of the operands be a floating point value. For instance:
10.0 / 100
or
float(10) / 100
Beware also that
10.0 / 100
results in a binary floating point value, and binary floating point types cannot represent the true result 0.1 exactly. So if you need to represent the result exactly, you may want a decimal data type; the decimal module has the functionality needed for that.
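For instance, a minimal sketch with the decimal module, using the numbers from the question (x = 50 is an illustrative value):
>>> from decimal import Decimal
>>> Decimal('10.00') / 100
Decimal('0.10')
>>> 50 * (Decimal('10.00') / 100)
Decimal('5.00')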
Division in Python works differently for float and int; take a look at this question and its answers: Python division.
Moreover, if you are looking for a way to format a floating point number as a string with a fixed number of decimal places, you can use %f:
"%f" % 1.0     # '1.000000'
"%.2f" % 1.0   # '1.00'
"%6.2f" % 1.0  # '  1.00'
Python 2.x will use integer division when dividing two integers unless you explicitly tell it to do otherwise. Two integers in --> one integer out.
Python 3 onwards will return, to quote PEP 238 (http://www.python.org/dev/peps/pep-0238/), "a reasonable approximation of the result of the division", i.e. it will perform true floating point division and return a float even when both operands are integers.
To enable this behaviour in earlier versions of Python, put
from __future__ import division
at the very top of the module; this should give you the consistent results you want.
You should use the decimal module. Each number knows how many significant digits it has.
If you're trying to preserve significant digits, the decimal module has everything you need. Example:
>>> from decimal import Decimal
>>> num = Decimal('10.00')
>>> num
Decimal('10.00')
>>> num / 10
Decimal('1.00')
I spent an hour today trying to figure out why
return abs(val-desired) <= 0.1
was occasionally returning False, despite val and desired having an absolute difference of <=0.1. After some debugging, I found out that -13.2 + 13.3 = 0.10000000000000142. Now I understand that CPUs cannot easily represent most real numbers, but this is an exception, because you can subtract 0.00000000000000142 and get 0.1, so it can be represented in Python.
I am running Python 2.7 on Intel Core architecture CPUs (this is all I have been able to test it on). I'm curious how I can store and compare against a value of 0.1, given that arithmetic on particular floating point values is imprecise. val and desired are float values.
Yes, this can be a bit surprising:
>>> +13.3
13.300000000000001
>>> -13.2
-13.199999999999999
>>> 0.1
0.10000000000000001
All these numbers can be represented with some 16 digits of accuracy. So why:
>>> 13.3-13.2
0.10000000000000142
Why only 14 digits of accuracy in that case?
Well, that's because 13.3 and -13.2 have 16 digits of accuracy, which here means 14 decimal places, since there are two digits before the decimal point. So the result also has only about 14 decimal places of accuracy, even though the computer can represent numbers with 16 digits.
If we make the numbers bigger, the accuracy of the result decreases further:
>>> 13000.3-13000.2
0.099999999998544808
>>> 1.33E10-13.2E10
-118700000000.0
In short, the accuracy of the result depends on the accuracy of the input.
"Now I understand that CPUs cannot easily represent most floating point numbers with high resolution", the fact you asked this question indicates that you don't understand. None of the real values 13.2, 13.3 nor 0.1 can be represented exactly as floating point numbers:
>>> "{:.20f}".format(13.2)
'13.19999999999999928946'
>>> "{:.20f}".format(13.3)
'13.30000000000000071054'
>>> "{:.20f}".format(0.1)
'0.10000000000000000555'
To directly address your question of "how do I store a value like 0.1 and do an exact comparison to it when I have imprecise floating-point numbers?": use a different type to represent your numbers. Python has a decimal module for doing decimal fixed-point and floating-point math instead of binary. In decimal, 0.1, -13.2, and 13.3 can all be represented exactly instead of approximately; you can also set a specific level of precision when doing calculations with decimal and discard digits below that level of significance.
val = decimal.Decimal(...)      # some calculation
desired = decimal.Decimal(...)  # some other calculation
return abs(val - desired) <= decimal.Decimal('0.1')
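For example, with the values from the question, assuming you can construct the Decimals from strings before any float arithmetic happens:
>>> import decimal
>>> val = decimal.Decimal('13.3')
>>> desired = decimal.Decimal('13.2')
>>> abs(val - desired) <= decimal.Decimal('0.1')
True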
The other common alternative is to use integers instead of floats by artificially multiplying by some power of ten.
return not int(abs(val-desired)*10)
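One caution about that one-liner: int() truncates, so the float noise from the question can still defeat it. abs(13.3 - 13.2) * 10 is 1.0000000000000142, int() makes that 1, and not 1 is False. Rounding each operand to integer tenths before subtracting is more robust (a sketch, not the original answer's code):
>>> round(13.3 * 10) - round(13.2 * 10)
1
so the two values differ by exactly one tenth. (On Python 2, round returns a float, so you would see 1.0.)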