Big number division leads to inexact result - Python

I am testing a Solidity program that deals with very large numbers (on the order of 1e17).
The formula I'm testing is the following: int(1e17*(100-3)/(100-1))
WolframAlpha and the Solidity language tell me that it's equivalent to 97979797979797979.
The test fails, however, because Python returns 97979797979797984.
How can I get the right value with Python?

The fractions module works well here, since division between two Fractions is exact and preserves the full precision of the result.
>>> from fractions import Fraction
>>> int(Fraction(10)**17*(100-3)/(100-1))
97979797979797979
>>>
(Here we write Fraction(10)**17 rather than 1e17 so that no float literal enters the computation at all.)
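If all you need is the truncated integer result, plain integer arithmetic is another exact option; a minimal sketch (// truncates toward negative infinity, which matches int() here because the value is positive):
>>> 10**17 * (100 - 3) // (100 - 1)
97979797979797979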

Cast float to int in Python gives wrong answer [duplicate]

I have an algorithm that is calculating:
result = int(14949283383840498/5262*27115)
The correct result should be 77033412951888085, but Python 3.8 gives me 77033412951888080.
I also have tried the following:
>>> result = 77033412951888085
>>> print(result)
77033412951888085
>>> print(int(result))
77033412951888085
>>> print(float(result))
7.703341295188808e+16
>>> print(int(float(result)))
77033412951888080
It seems the problem occurs when I cast the float to int. What am I missing?
PS: I have found that using result = 14949283383840498//5262*27115 I get the right answer!
Casting is not the issue. Floating-point arithmetic has limitations with respect to precision. See https://docs.python.org/3/tutorial/floatingpoint.html
You need to either use integer division or use the decimal module, which defaults to 28 significant digits of precision.
Using integer division
result = 14949283383840498 // 5262 * 27115
print(result)
Output:
77033412951888085
Using decimal module
from decimal import Decimal
result = Decimal(14949283383840498) / 5262 * 27115
print(result)
Output:
77033412951888085
It is a precision limitation:
result = 14949283383840498/5262*27115
result
7.703341295188808e+16
In this case, result is a float.
You can see that the precision is about 15 significant digits.
Convert that to int and you will see that the last non-zero digit is 8, which matches what the float shows when printed.
Try the following:
import sys
print(sys.float_info.dig)
15
dig is the maximum number of decimal digits that can be faithfully represented in a float.
A very good explanation regarding this issue is available here.
But there are ways to do better in Python; see the Python docs:
For use cases which require exact decimal representation, try using
the decimal module which implements decimal arithmetic suitable for
accounting applications and high-precision applications.
Another form of exact arithmetic is supported by the fractions module
which implements arithmetic based on rational numbers (so the numbers
like 1/3 can be represented exactly).
If you are a heavy user of floating point operations you should take a
look at the NumPy package and many other packages for mathematical and
statistical operations supplied by the SciPy project
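For this particular calculation, the fractions module also works; a small sketch (the division here happens to be exact, so the result is a whole number):
>>> from fractions import Fraction
>>> int(Fraction(14949283383840498, 5262) * 27115)
77033412951888085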

Get a decimal result up to n digits of precision in Python

I have an app that performs some complex mathematical operations, producing very small results such as 0.000028. If I perform a calculation like 29631/1073741824 in JavaScript, it gives me 0.000027596019208431244, whereas the same calculation in Python with the float data type gives me 0.0. How can I increase the number of decimal places in Python's results?
The problem is not a lack of decimal places; it's that in Python 2 integer division produces an integer result. This means that 29631/1073741824 yields 0 rather than the float you are expecting. To work around this, you can use a float for either operand:
>>> 29631.0/1073741824
2.7596019208431244e-05
This changed in Python 3, where the division operator does the expected thing. You can use a from __future__ import to turn on the new behavior in Python 2:
>>> from __future__ import division
>>> 29631/1073741824
2.7596019208431244e-05
If you're using Python 3.x, you should try this:
print(str(float(29631)/1073741824))
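If the goal is specifically to show the result to n decimal places, string formatting covers that once the division itself is done in floating point; a small sketch (assuming Python 3, with 10 places chosen arbitrarily):
>>> print(f"{29631/1073741824:.10f}")
0.0000275960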

Sympy returns zero for fractions with real numbers

I'm new to Sympy, and have realized it is quite nice for calculating and simplifying algebraic expressions.
However, when I write fractions of real numbers it returns zero (there is no problem with fractions of symbols like 'x'). What am I doing wrong?
from sympy import *
1./2
Out[2]: 0.5
1/2
Out[3]: 0
It's because Python (2.7) needs a float in the numerator or denominator to return a float.
In Python 3.x any division with / returns a float.
You can also 'fix' this in Python 2.7 by using:
from __future__ import division
Otherwise Python follows the integer division rules and returns an integer rather than a float.
1/2 isn't using SymPy at all. This is just evaluated by Python. Take a read of http://docs.sympy.org/latest/tutorial/gotchas.html#two-final-notes-and (I actually recommend reading through the whole SymPy tutorial).
Basically, if you are using SymPy, you probably want a rational number, which you can get with Rational(1, 2). An easier way to type this is S(1)/2. The S function converts 1 into SymPy's Integer(1), which then becomes a Rational when divided by 2.
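A small sketch of both spellings (assuming SymPy is installed; the default printer displays the rational as 1/2):
>>> from sympy import Rational, S
>>> Rational(1, 2)
1/2
>>> S(1)/2
1/2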

Handling very large numbers

I need to write a simple program that calculates a mathematical formula.
The only problem here is that one of the variables can take the value 10^100.
Because of this I can not write this program in C++/C (I can't use external libraries like gmp).
A few hours ago I read that Python is capable of calculating such values.
My question is:
Why
print("%.10f"%(10.25**100))
is returning the number "118137163510621843218803309161687290343217035128100169109374848108012122824436799009169146127891562496.0000000000"
instead of
"118137163510621850716311252946961817841741635398513936935237985161753371506358048089333490072379307296.453937046171461"?
By default, Python uses a fixed precision floating-point data type to represent fractional numbers (just like double in C). You can work with precise rational numbers, though:
>>> from fractions import Fraction
>>> Fraction("10.25")
Fraction(41, 4)
>>> x = Fraction("10.25")
>>> x**100
Fraction(189839102486063226543090986563273122284619337618944664609359292215966165735102377674211649585188827411673346619890309129617784863285653302296666895356073140724001, 1606938044258990275541962092341162602522202993782792835301376)
You can also use the decimal module if you want arbitrary precision decimals (only numbers that are representable as finite decimals are supported, though):
>>> from decimal import *
>>> getcontext().prec = 150
>>> Decimal("10.25")**100
Decimal('118137163510621850716311252946961817841741635398513936935237985161753371506358048089333490072379307296.453937046171460995169093650913476028229144848989')
Python is capable of handling arbitrarily large integers, but not arbitrary-precision floating point values. Floats can get pretty large, but as you noticed, you lose precision in the low digits.
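A small sketch of that difference, using arbitrary illustrative values: integer arithmetic stays exact at any size, while float arithmetic silently drops the low digits:
>>> 10**100 + 1 - 10**100        # int arithmetic: exact at any size
1
>>> 1e100 + 1 - 1e100            # float arithmetic: the +1 is lost
0.0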

Floating Point Concepts in Python

Why does -22/10 return -3 in Python? Any pointers regarding this would be helpful.
Because it's integer division by default. And integer division is rounded towards minus infinity. Take a look:
>>> -22/10
-3
>>> -22/10.0
-2.2000000000000002
Positive:
>>> 22/10
2
>>> 22/10.0
2.2000000000000002
Regarding the seeming "inaccuracy" of floating point, this is a great article to read: Why are floating point calculations so inaccurate?
PEP 238, "Changing the Division Operator", explains the issues well, I think. In brief: when Python was designed it adopted the "truncating" meaning for / between integers, simply because most other programming languages did ever since the first FORTRAN compiler was launched in 1957 (all-uppercase language name and all;-). (One widespread language that didn't adopt this meaning, using / to produce a floating point result and div for truncation, was Pascal).
In 2001 it was decided that this choice was not optimal (to quote the PEP, "This makes expressions expecting float or complex results error-prone when integers are not expected but possible as inputs"), and to switch to using a new operator // to request division with truncation, and change the meaning of / to produce a float result ("true division").
You can explicitly request this behavior by putting the statement
from __future__ import division
at the start of a module (the command-line switch -Q to the python interpreter can also control the behavior of division). Missing such an "import from the future" (and command line switch use), Python 2.x, for all values of x, always uses "classic division" (i.e., / is truncating between ints).
Python 3, however, always uses "true division" (/ between ints produces a float).
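A small sketch of the Python 3 behaviour, contrasting true division with floor division:
>>> -22 / 10
-2.2
>>> -22 // 10
-3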
Note a curious corollary (in Python 3)...:
>>> from fractions import Fraction
>>> Fraction(1/2)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/fractions.py", line 100, in __new__
raise TypeError("argument should be a string "
TypeError: argument should be a string or a Rational instance
since / produces a float, it's not acceptable as the argument to Fraction (otherwise precision might be silently lost). You must use a string, or pass numerator and denominator as separate arguments:
>>> Fraction(1, 2)
Fraction(1, 2)
>>> Fraction('1/2')
Fraction(1, 2)
gmpy uses a different, more tolerant approach to building mpqs, its equivalent of Python 3 Fractions...:
>>> import gmpy
>>> gmpy.mpq(1/2)
mpq(1,2)
Specifically (see lines 3168 and following in the source), gmpy uses a Stern-Brocot tree to get the "best practical approximation" of the floating point argument as a rational (of course, this can mask a loss of precision).
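The standard library offers something comparable via Fraction.limit_denominator; a small sketch (not the same algorithm gmpy uses, just the same idea of approximating a float by a simple rational):
>>> from fractions import Fraction
>>> Fraction.from_float(0.1)
Fraction(3602879701896397, 36028797018963968)
>>> Fraction.from_float(0.1).limit_denominator()
Fraction(1, 10)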
By default, the current versions of Python 2.x (I'm not sure about 3.x) give an integer result for any arithmetic operator when both operands are integers. However, there is a way to change this behaviour.
from __future__ import division
print(22/10)
Outputs
2.2000000000000002
Of course, a simpler way is to make one of the operands a float, as described in the previous two answers.
Because you're doing an integer division. If you do -22.0/10 instead, you'll get the correct result.
This happens because integer division returns the quotient which, when multiplied by the divisor, gives the largest multiple of the divisor that is no larger than the number you divided.
This is exactly why 22/10 gives 2: 10*2=20, which is the largest multiple of 10 not bigger than 22.
When this goes negative, your operation becomes -22/10 and your result is -3. Applying the same logic as in the previous case, we see that 10*-3=-30, which is the largest multiple of 10 not bigger than -22.
This is why you get a slightly unexpected answer when dealing with negative numbers.
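A small sketch of the underlying invariant (the divmod identity a == (a // b) * b + a % b holds for negative values too):
>>> q, r = divmod(-22, 10)
>>> q, r
(-3, 8)
>>> q * 10 + r
-22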
Hope that helps
