Decimal in Python

I am using Python for programming and Gurobi for solving my optimization problems. As part of my code I read data from a text file (called “Feed2”) and then do some calculations on it:
# C, L11, L12, L13 and A are defined earlier in the program (not shown)
with open('Feed2.txt', 'r') as Fee:
    for i in range(C):
        Feed = Fee.readline()
        for s in L11:
            A[i, s] = float(Feed)
        for s in L12:
            A[i, s] = float(Feed) * 1.28
        for s in L13:
            A[i, s] = float(Feed) * 0.95
print(A)
The result shows that some of the numbers have many digits after the decimal point (such as 106.51209999999999 or 1029.4144000000001), which creates a problem for Gurobi when it reads them; those extra digits are not useful to me. So I want to set the number of digits after the decimal point to 5 for my entire program. I followed the method explained at https://docs.python.org/3/library/decimal.html (code below), but nothing changed.
from decimal import *
getcontext().prec = 5

The documentation for the decimal module offers an explanation:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem.
When you did:
from decimal import *
getcontext().prec = 5
You only changed the precision used by Decimal objects from the decimal module. You did not change the precision of Python's built-in floating-point numbers.
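For example (note that prec counts significant digits, not digits after the decimal point):
>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 5
>>> 1.0 / 3                    # built-in float: not affected by the context
0.3333333333333333
>>> Decimal(1) / Decimal(3)    # Decimal arithmetic: rounded to 5 significant digits
Decimal('0.33333')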
As said in the comments, the behavior you are experiencing is not new. It is simply a side effect of the way floating-point numbers are stored in memory. If you really need the floats to keep a specific precision, use the decimal.Decimal class, e.g.:
>>> from decimal import Decimal
>>> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal('0.1')
Decimal('0.1')
>>> Decimal('0.1') / Decimal('0.5')
Decimal('0.2')
If you simply need to round the value to a specific precision for display, use str.format with a format like:
'{:<minimum field width>.<number of digits after decimal>f}'.format(value)
Or with old-style formatting:
'%<minimum field width>.<number of digits after decimal>f' % (value,)
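For example, to keep 5 digits after the decimal point as asked (x here stands in for one of your values):
>>> x = 106.51209999999999
>>> '{:.5f}'.format(x)
'106.51210'
>>> '%.5f' % (x,)
'106.51210'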
Recommended reading: What Every Computer Scientist Should Know About Floating-Point Arithmetic.

If you just need to print the numbers with, for example, only two decimals:
print "%.2f" % (A,)
or the newer
print "{0:.2f}".format(A)

Related

Having problems with the Decimal library of Python

So I was trying to minimize floating-point errors when doing arithmetic in Python, and I stumbled upon the decimal module. It worked great at first, up until this operation:
from decimal import *
getcontext().prec = 100
test_x = Decimal(str(3.25)).quantize(Decimal('0.000001'), rounding=ROUND_HALF_UP)
test_y = Decimal(str(2196.646351)).quantize(Decimal('0.000001'), rounding=ROUND_HALF_UP)
print((test_y)*(test_x**Decimal('2')))
The above code outputs 23202.077082437500000000 instead of 23202.07708, which is what a conventional calculator gives. How can I make the output round off to 6 decimal places, like a calculator? Also, do you have better ways to do arithmetic calculations in Python?
I have tried Python's round() function, but that is off limits for me because I am dealing with very large numbers that reach the maximum length of numbers round() supports.
Adding further context to the code: I can't change the value of getcontext().prec or the .quantize(Decimal('0.000001')) because I am dealing with numbers like 109796940503037.6545639765, and it gives me errors if I don't set getcontext().prec to a high number.
I can't change getcontext().prec to, say, 6 because it always gives the error:
InvalidOperation: [<class 'decimal.InvalidOperation'>]
If you do Decimal(str(123312.12321221332)), it converts a float to a string and then passes it to Decimal; you are losing precision during that conversion.
Do: Decimal('123312.12321221332') instead.
Also keep in mind that:
Unlike hardware based binary floating point, the decimal module has a user alterable precision (defaulting to 28 places) which can be as large as needed for a given problem
https://docs.python.org/3/library/decimal.html
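If the goal is simply a final result rounded to 6 decimal places, one option (a sketch reusing the numbers from the question) is to keep the working precision high and quantize once at the end instead of quantizing the inputs:
from decimal import Decimal, ROUND_HALF_UP, getcontext

getcontext().prec = 100
x = Decimal('3.25')
y = Decimal('2196.646351')
result = (y * x ** 2).quantize(Decimal('0.000001'), rounding=ROUND_HALF_UP)
print(result)  # 23202.077082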

Cast float to int in Python results in wrong answer

I have an algorithm that is calculating:
result = int(14949283383840498/5262*27115)
The correct result should be 77033412951888085, but Python 3.8 gives me 77033412951888080.
I also have tried the following:
>>> result = 77033412951888085
>>> print(result)
77033412951888085
>>> print(int(result))
77033412951888085
>>> print(float(result))
7.703341295188808e+16
>>> print(int(float(result)))
77033412951888080
It seems the problem occurs when I cast the float to int. What am I missing?
PS: I have found that using result = 14949283383840498//5262*27115 I get the right answer!
Casting is not the issue. Floating-point arithmetic has limitations with respect to precision. See https://docs.python.org/3/tutorial/floatingpoint.html
You need to either use integer division or use the decimal module, which defaults to 28 places of precision.
Using integer division
result = 14949283383840498 // 5262 * 27115
print(result)
Output:
77033412951888085
Using decimal module
from decimal import Decimal
result = Decimal(14949283383840498) / 5262 * 27115
print(result)
Output:
77033412951888085
It is a precision limitation:
>>> result = 14949283383840498 / 5262 * 27115
>>> result
7.703341295188808e+16
In this case, result is a float.
You can see that the precision is 15 significant digits.
Converting that to int, you see that the last nonzero digit is 8, which matches what the float shows when printed.
Try the following:
>>> import sys
>>> print(sys.float_info.dig)
15
dig is the maximum number of decimal digits that can be faithfully represented in a float.
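A quick way to see the underlying 53-bit limit (2**53 + 1 is the first integer a double cannot represent):
>>> float(2**53) == float(2**53 + 1)
True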
A very good explanation regarding this issue is available here.
But there are ways to do better in Python; see the Python docs:
For use cases which require exact decimal representation, try using
the decimal module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.
Another form of exact arithmetic is supported by the fractions module
which implements arithmetic based on rational numbers (so the numbers
like 1/3 can be represented exactly).
If you are a heavy user of floating point operations you should take a
look at the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project
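A quick illustration of the fractions module mentioned in the quote above:
>>> from fractions import Fraction
>>> Fraction(1, 3) + Fraction(1, 6)   # exact rational arithmetic
Fraction(1, 2)
>>> float(Fraction(1, 3) * 3)         # no rounding error accumulates
1.0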

Handling very large numbers

I need to write a simple program that calculates a mathematical formula.
The only problem here is that one of the variables can take the value 10^100.
Because of this I cannot write this program in C/C++ (I can't use external libraries like gmp).
A few hours ago I read that Python is capable of calculating such values.
My question is:
Why
print("%.10f"%(10.25**100))
is returning the number "118137163510621843218803309161687290343217035128100169109374848108012122824436799009169146127891562496.0000000000"
instead of
"118137163510621850716311252946961817841741635398513936935237985161753371506358048089333490072379307296.453937046171461"?
By default, Python uses a fixed precision floating-point data type to represent fractional numbers (just like double in C). You can work with precise rational numbers, though:
>>> from fractions import Fraction
>>> Fraction("10.25")
Fraction(41, 4)
>>> x = Fraction("10.25")
>>> x**100
Fraction(189839102486063226543090986563273122284619337618944664609359292215966165735102377674211649585188827411673346619890309129617784863285653302296666895356073140724001, 1606938044258990275541962092341162602522202993782792835301376)
You can also use the decimal module if you want arbitrary precision decimals (only numbers that are representable as finite decimals are supported, though):
>>> from decimal import *
>>> getcontext().prec = 150
>>> Decimal("10.25")**100
Decimal('118137163510621850716311252946961817841741635398513936935237985161753371506358048089333490072379307296.453937046171460995169093650913476028229144848989')
Python is capable of handling arbitrarily large integers, but not floating point values. They can get pretty large, but as you noticed, you lose precision in the low digits.
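To see the difference in the interpreter (standard library only):
>>> len(str(10**100))   # ints are exact at any size: 101 digits
101
>>> 10.0**100           # floats keep only about 15-17 significant digits
1e+100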

More Digits in Irrational Numbers

>>> str(1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702)
'1.41421356237'
Is there a way I can make str() record more digits of the number into the string? I don't understand why it truncates by default.
Python's floating point numbers use double precision only, which is 64 bits. They simply cannot represent (significantly) more digits than you're seeing.
If you need more, have a look at the built-in decimal module, or the mpmath package.
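For example, with mpmath (a third-party package; a minimal sketch assuming it is installed, e.g. via pip install mpmath):
>>> from mpmath import mp
>>> mp.dps = 120        # working precision in decimal places
>>> print(mp.sqrt(2))   # prints sqrt(2) to roughly 120 significant digits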
Try this:
>>> from decimal import *
>>> Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
Decimal('1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702')
The float literal itself is rounded to fit in the space a float provides (i.e. it is not str doing the truncating):
>>> 1.41421356237309504880168872420969807856967187537694807317667973799073247846210703885038753432764157273501384623091229702
1.4142135623730951
If you need more decimal places use decimal instead.
The Python compiler is truncating; your float literal has more precision than can be represented in a C double. Express the number as a string in the first place if you need more precision.
That's because it's converting to a float. It's not the conversion to the string that's causing it.
You should use decimal.Decimal for representing such high precision numbers.

Why do simple math operations on floating point return unexpected (inaccurate) results in VB.Net and Python?

x = 4.2 - 0.1
vb.net gives 4.1000000000000005
python gives 4.1000000000000005
Excel gives 4.1
Google calc gives 4.1
What is the reason this happens?
Float/double precision.
You must remember that in binary, 4.1 = 4 + 1/10. 1/10 is an infinitely repeating sum in binary, much like 1/9 is an infinite sum in decimal.
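You can see the repeating pattern directly in the float's hex representation (each hex digit 9 is binary 1001; the pattern repeats until the 53-bit significand runs out and the final digit is rounded up to a):
>>> (0.1).hex()
'0x1.999999999999ap-4'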
>>> x = 4.2 - 0.1
>>> x
4.1000000000000005
>>> print(x)
4.1
This happens because of how numbers are stored internally.
Computers represent numbers in binary, instead of decimal, as us humans are used to. With floating point numbers, computers have to make an approximation to the closest binary floating point value.
Almost all machines today (November 2000) use IEEE-754 floating point arithmetic, and almost all platforms map Python floats to IEEE-754 “double precision”. 754 doubles contain 53 bits of precision, so on input the computer strives to convert 0.1 to the closest fraction it can of the form J/2**N where J is an integer containing exactly 53 bits.
If you print the number, it will show a rounded approximation, not the exact stored value. For example, the real value of 0.1 is 0.1000000000000000055511151231257827021181583404541015625.
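You can inspect that full stored value yourself with ordinary string formatting (the double that stores 0.1 terminates after exactly 55 decimal digits):
>>> format(0.1, '.55f')
'0.1000000000000000055511151231257827021181583404541015625'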
If you really need a base 10 based number (if you don't know the answer to this question, you don't), you could use (in Python) decimal.Decimal:
>>> from decimal import Decimal
>>> Decimal("4.2") - Decimal("0.1")
Decimal("4.1")
Binary floating-point arithmetic holds many surprises like this. The problem with “0.1” is explained in precise detail below, in the “Representation Error” section. See The Perils of Floating Point for a more complete account of other common surprises.
As that says near the end, “there are no easy answers.” Still, don’t be unduly wary of floating-point! The errors in Python float operations are inherited from the floating-point hardware, and on most machines are on the order of no more than 1 part in 2**53 per operation. That’s more than adequate for most tasks, but you do need to keep in mind that it’s not decimal arithmetic, and that every float operation can suffer a new rounding error.
While pathological cases do exist, for most casual use of floating-point arithmetic you’ll see the result you expect in the end if you simply round the display of your final results to the number of decimal digits you expect. str() usually suffices, and for finer control see the str.format() method’s format specifiers in Format String Syntax.
There is no problem, really. It is just the way floats work (their internal binary representation). Anyway:
>>> from decimal import Decimal
>>> Decimal('4.2')-Decimal('0.1')
Decimal('4.1')
In VB.Net, you can avoid this problem by using the Decimal type instead:
Dim x As Decimal = 4.2D - 0.1D
The result is 4.1.
