Python: properly adding fractions [duplicate]

I would like to do some calculations with fractions, keeping all the numbers as fractions and never converting to decimals.
For example, 1/2 + 1/4 = 3/4. We can do this in Python by using the Fraction class from the fractions module: Fraction(1/2 + 1/4) or Fraction(1/2) + Fraction(1/4), if we want.
However, I can't get Python to give me the correct fraction for 1/2 + 1/3, because 1/3 is a non-terminating decimal. Fraction(1/2 + 1/3) doesn't work, nor does Fraction(Fraction(1/2) + Fraction(1/3)).
EDIT: The reason I used Fraction(1/2) instead of Fraction(1, 2) is that in my code I know I'm working only with fractions, but I won't know the numerator and denominator of a fraction, just the fraction itself.

Sure you can.
>>> from fractions import Fraction
>>> Fraction(1, 2) + Fraction(1, 3)
Fraction(5, 6)
Your notation seems to be the problem. Notice that:
>>> Fraction(1/3) == Fraction(1, 3)
False
while
>>> Fraction(1/2) == Fraction(1, 2)
True
As a result:
>>> Fraction(1/2) + Fraction(1/3)
Fraction(15011998757901653, 18014398509481984) # almost 5/6 but not quite
That happens because when you type Fraction(1/3), the 1/3 is evaluated as floating-point division first and the resulting float is passed to Fraction. By contrast, Fraction(1, 3) simply creates a fraction with 1 as the numerator and 3 as the denominator.
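Regarding the edit in the question: if you only have the value as "a fraction" rather than as a separate numerator and denominator, note that Fraction also accepts a string, and limit_denominator() can recover a nearby exact ratio from a float. A small sketch (not specific to the original code):
>>> from fractions import Fraction
>>> Fraction('1/3')                       # parse the fraction from a string
Fraction(1, 3)
>>> Fraction(1/3).limit_denominator()     # recover the closest simple ratio to the float 1/3
Fraction(1, 3)
>>> Fraction(1/2) + Fraction(1/3).limit_denominator()
Fraction(5, 6)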

You can use the Fraction function in this manner.
from fractions import Fraction
f = Fraction(1,2) + Fraction(1,3)
print(f)
Output: 5/6

Related

How to round a column of percentage values in a python dataframe? [duplicate]

I was just re-reading What’s New In Python 3.0 and it states:
The round() function rounding strategy and return type have changed.
Exact halfway cases are now rounded to the nearest even result instead
of away from zero. (For example, round(2.5) now returns 2 rather than
3.)
and the documentation for round() says:
For the built-in types supporting round(), values are rounded to the
closest multiple of 10 to the power minus n; if two multiples are
equally close, rounding is done toward the even choice
So, under v2.7.3:
In [85]: round(2.5)
Out[85]: 3.0
In [86]: round(3.5)
Out[86]: 4.0
as I'd have expected. However, now under v3.2.3:
In [32]: round(2.5)
Out[32]: 2
In [33]: round(3.5)
Out[33]: 4
This seems counter-intuitive and contrary to what I understand about
rounding (and bound to trip people up). English isn't my native language, but
until I read this I thought I knew what rounding meant :-/ I am sure
there must have been some discussion of this when v3 was introduced,
but I was unable to find a good reason in my search.
Does anyone have insight into why this was changed?
Are there any other mainstream programming languages (e.g., C, C++, Java, Perl, ..) that do this sort of (to me inconsistent) rounding?
What am I missing here?
UPDATE: @Li-aungYip's comment re "Banker's rounding" gave me the right search term/keywords and I found this SO question: Why does .NET use banker's rounding as default?, so I will be reading that carefully.
Python 3's way (called "round half to even" or "banker's rounding") is considered the standard rounding method these days, though some language implementations aren't on the bus yet.
The simple "always round 0.5 up" technique results in a slight bias toward the higher number. With large numbers of calculations, this can be significant. The Python 3.0 approach eliminates this issue.
There is more than one method of rounding in common use. IEEE 754, the international standard for floating-point math, defines five different rounding methods (the one used by Python 3.0 is the default). And there are others.
This behavior is not as widely known as it ought to be. AppleScript was, if I remember correctly, an early adopter of this rounding method. The round command in AppleScript offers several options, but round-toward-even is the default as it is in IEEE 754. Apparently the engineer who implemented the round command got so fed up with all the requests to "make it work like I learned in school" that he implemented just that: round 2.5 rounding as taught in school is a valid AppleScript command. :-)
You can control the rounding you get in Python 3 using the decimal module:
>>> import decimal
>>> decimal.Decimal('3.5').quantize(decimal.Decimal('1'), rounding=decimal.ROUND_HALF_UP)
Decimal('4')
>>> decimal.Decimal('2.5').quantize(decimal.Decimal('1'), rounding=decimal.ROUND_HALF_EVEN)
Decimal('2')
>>> decimal.Decimal('3.5').quantize(decimal.Decimal('1'), rounding=decimal.ROUND_HALF_DOWN)
Decimal('3')
Just to add an important note from the documentation:
https://docs.python.org/dev/library/functions.html#round
Note
The behavior of round() for floats can be surprising: for example,
round(2.675, 2) gives 2.67 instead of the expected 2.68. This is not a
bug: it’s a result of the fact that most decimal fractions can’t be
represented exactly as a float. See Floating Point Arithmetic: Issues
and Limitations for more information.
So don't be surprised to get the following results in Python 3.2:
>>> round(0.25,1), round(0.35,1), round(0.45,1), round(0.55,1)
(0.2, 0.3, 0.5, 0.6)
>>> round(0.025,2), round(0.035,2), round(0.045,2), round(0.055,2)
(0.03, 0.04, 0.04, 0.06)
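To see what is going on in a case like round(2.675, 2), you can print the Decimal built directly from the float; on a typical IEEE-754 platform the stored value is slightly below 2.675, so rounding down to 2.67 is correct behaviour:
>>> from decimal import Decimal
>>> Decimal(2.675)
Decimal('2.67499999999999982236431605997495353221893310546875')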
Python 3.x rounds .5 values to the nearest even neighbour:
assert round(0.5) == 0
assert round(1.5) == 2
assert round(2.5) == 2
import decimal
assert decimal.Decimal('0.5').to_integral_value() == 0
assert decimal.Decimal('1.5').to_integral_value() == 2
assert decimal.Decimal('2.5').to_integral_value() == 2
However, one can change decimal rounding "back" to always round .5 up, if needed:
decimal.getcontext().rounding = decimal.ROUND_HALF_UP
assert decimal.Decimal('0.5').to_integral_value() == 1
assert decimal.Decimal('1.5').to_integral_value() == 2
assert decimal.Decimal('2.5').to_integral_value() == 3
i = int(decimal.Decimal('2.5').to_integral_value()) # to get an int
assert i == 3
assert type(i) is int
I recently had problems with this, too. Hence, I have developed a Python 3 module that has two functions, trueround() and trueround_precision(), that address this and give the same rounding behaviour we are used to from primary school (not banker's rounding). Here is the module. Just save the code and copy it in or import it. Note: trueround_precision() can change the rounding behaviour depending on needs, according to the ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR, ROUND_HALF_DOWN, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_UP, and ROUND_05UP flags in the decimal module (see that module's documentation for more info). For the functions below, see the docstrings, or use help(trueround) and help(trueround_precision) in an interpreter, for further documentation.
#! /usr/bin/env python3
# -*- coding: utf-8 -*-

def trueround(number, places=0):
    '''
    trueround(number, places)

    example:

        >>> trueround(2.5) == 3
        True

    Uses standard functions with no import to give "normal" rounding
    behaviour, so that trueround(2.5) == 3, trueround(3.5) == 4,
    trueround(4.5) == 5, etc. Use with caution, however: this still has
    the usual floating-point problems. The result is returned as an int
    when it is a whole number, otherwise as a float.

    number is the floating-point number that needs rounding.

    places is the number of decimal places to round to, with 0 as the
    default, which returns an integer. Otherwise, a float rounded to the
    given decimal place is returned.

    Note: use trueround_precision() if true precision with floats is needed.

    GPL 2.0
    copyright by Narnie Harshoe <signupnarnie@gmail.com>
    '''
    place = 10**places
    # add (or subtract) half a unit in the last kept place, then truncate
    rounded = int(number*place + (0.5 if number >= 0 else -0.5)) / place
    if rounded == int(rounded):
        rounded = int(rounded)
    return rounded

def trueround_precision(number, places=0, rounding=None):
    '''
    trueround_precision(number, places, rounding=ROUND_HALF_UP)

    Uses true precision for floating-point numbers via the 'decimal'
    module. The return object is of type Decimal.

    All rounding options from the decimal module are available, including
    ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR, ROUND_HALF_DOWN, ROUND_HALF_EVEN,
    ROUND_HALF_UP, ROUND_UP, and ROUND_05UP.

    examples:

        >>> trueround_precision(2.5, 0) == Decimal('3')
        True
        >>> trueround_precision(2.5, 0, ROUND_DOWN) == Decimal('2')
        True

    number is a floating-point number, or a string containing a number,
    on which to act.

    places is the number of decimal places to round to, with 0 as the default.

    Note: if a float is passed as the first argument, it is first converted
    to a str for correct rounding.

    GPL 2.0
    copyright by Narnie Harshoe <signupnarnie@gmail.com>
    '''
    from decimal import Decimal as dec
    from decimal import ROUND_HALF_UP
    # other rounding constants (ROUND_DOWN, ROUND_CEILING, ...) can be
    # imported from decimal by the caller and passed in as `rounding`

    if isinstance(number, float):
        number = str(number)
    if rounding is None:
        rounding = ROUND_HALF_UP
    # build the quantization pattern, e.g. '1.00' for places=2
    place = '1.'
    for i in range(places):
        place = ''.join([place, '0'])
    return dec(number).quantize(dec(place), rounding=rounding)
Hope this helps,
Narnie
To get Python 2 rounding behaviour in Python 3, add 1 at the 15th decimal place. This is accurate up to 15 digits.
round2 = lambda x, y=None: round(x + 1e-15, y)
Not right for 175.57: for a number that large, the nudge would have to be added at the 13th decimal place instead. Switching to Decimal is better than reinventing the same wheel.
from decimal import Decimal, ROUND_HALF_UP
def round2(x, y=2):
    prec = Decimal(10) ** -y
    return float(Decimal(str(round(x, 3))).quantize(prec, rounding=ROUND_HALF_UP))
Some cases:
in:  Decimal(75.29 / 2).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
in:  round(75.29 / 2, 2)
out: 37.65 GOOD
in:  Decimal(85.55 / 2).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
in:  round(85.55 / 2, 2)
out: 42.77 BAD
For a fix:
in:  round(75.29 / 2 + 0.00001, 2)
out: 37.65 GOOD
in:  round(85.55 / 2 + 0.00001, 2)
out: 42.78 GOOD
If you want more decimal places, for example 4, you should add + 0.0000001.
Works for me.
Sample Reproduction:
['{} => {}'.format(x+0.5, round(x+0.5)) for x in range(10)]
['0.5 => 0', '1.5 => 2', '2.5 => 2', '3.5 => 4', '4.5 => 4', '5.5 => 6', '6.5 => 6', '7.5 => 8', '8.5 => 8', '9.5 => 10']
API: https://docs.python.org/3/library/functions.html#round
States:
Return number rounded to ndigits precision after the decimal point. If
ndigits is omitted or is None, it returns the nearest integer to its
input.
For the built-in types supporting round(), values are rounded to the
closest multiple of 10 to the power minus ndigits; if two multiples
are equally close, rounding is done toward the even choice (so, for
example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).
Any integer value is valid for ndigits (positive, zero, or negative).
The return value is an integer if ndigits is omitted or None.
Otherwise the return value has the same type as number.
For a general Python object number, round delegates to
number.__round__.
Note The behavior of round() for floats can be surprising: for
example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This
is not a bug: it’s a result of the fact that most decimal fractions
can’t be represented exactly as a float. See Floating Point
Arithmetic: Issues and Limitations for more information.
Given this insight, you can use some math to resolve it:
import math
def my_round(i):
    f = math.floor(i)
    return f if i - f < 0.5 else f + 1
now you can run the same test with my_round instead of round.
['{} => {}'.format(x + 0.5, my_round(x+0.5)) for x in range(10)]
['0.5 => 1', '1.5 => 2', '2.5 => 3', '3.5 => 4', '4.5 => 5', '5.5 => 6', '6.5 => 7', '7.5 => 8', '8.5 => 9', '9.5 => 10']
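Note that my_round always rounds halves toward positive infinity, so my_round(-2.5) returns -2. If you want the school rule of rounding halves away from zero for negative numbers as well, a small variant could look like this (my_round_half_away is just an illustrative name, not from the original answer):
import math
def my_round_half_away(x):
    # round half away from zero: 2.5 -> 3, -2.5 -> -3
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)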
I propose a custom function which would work for a DataFrame (it assumes numpy, and pandas for the DataFrame itself):
import numpy as np

def dfCustomRound(df, dec):
    d = 1 / 10 ** dec
    df = round(df, dec + 2)
    return (((df % (1 * d)) == 0.5 * d).astype(int) * 0.1 * d * np.sign(df) + df).round(dec)
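A usage sketch, assuming pandas is available and dfCustomRound is defined as above (the column name and values are made up):
import pandas as pd
df = pd.DataFrame({'price': [2.5, 3.5, 2.675]})
print(dfCustomRound(df, 0))   # compare with df.round(0), which rounds halves to even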
# numpy's round gives the desired X+1 here only because 4 happens to be even:
# np.round also rounds half to even, so np.round(2.5) is 2.0, not 3.0
import numpy as np
example_of_some_variable = 3.5
rounded_result_of_variable = np.round(example_of_some_variable, 0)
print(rounded_result_of_variable)   # 4.0
Try this code:
def roundup(value):
    # work on the string form and nudge a trailing 5 up to 6,
    # so that round() rounds the value up instead of to even
    demo = str(value) if str(value)[-1] != "5" else str(value)[:-1] + "6"
    place = len(demo.split(".")[1]) - 1
    return round(float(demo), place)
The result will be:
>>> x = roundup(2.5)
>>> x
3.0
>>> x = roundup(2.05)
>>> x
2.1
>>> x = roundup(2.005)
>>> x
2.01
The easiest way to round in Python 3.x as taught in school is using an auxiliary variable:
n = 0.1
round(2.5 + n)
And these will be the results of the series 2.0 to 3.0 (in 0.1 steps):
>>> round(2 + n)
2
>>> round(2.1 + n)
2
>>> round(2.2 + n)
2
>>> round(2.3 + n)
2
>>> round(2.4 + n)
2
>>> round(2.5 + n)
3
>>> round(2.6 + n)
3
>>> round(2.7 + n)
3
>>> round(2.8 + n)
3
>>> round(2.9 + n)
3
>>> round(3 + n)
3
You can also force rounding up with math.ceil (note that ceil always rounds up, not only halves, so math.ceil(2.4) is 3 as well):
import math
print(math.ceil(2.5))
# 3

Determine whether the value of a Fraction is an integer or a float in Python

I came across the Fraction module in Python. I have problems determining whether the result of multiplying an integer by a fraction is an integer.
For example, the fraction is f = Fraction(1, 3), and when the number n = 2 is multiplied by it, the result is Fraction(2, 3) = 0.666666666.... When n = 3, the result should be Fraction(3, 3) = 1.
My question is how to distinguish Fraction(3, 3) from Fraction(1, 3). That is to say, is the value of a fraction an integer or a float? Thanks!
If f.denominator == 1, the number is an integer. Fraction automatically simplifies the number, so Fraction(3, 3) == Fraction(1, 1).
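For example:
>>> from fractions import Fraction
>>> f = Fraction(1, 3)
>>> 2 * f, (2 * f).denominator == 1
(Fraction(2, 3), False)
>>> 3 * f, (3 * f).denominator == 1
(Fraction(1, 1), True)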

Clarification on the Decimal type in Python

Everybody knows, or at least, every programmer should know, that using the float type could lead to precision errors. However, in some cases, an exact solution would be great and there are cases where comparing using an epsilon value is not enough. Anyway, that's not really the point.
I knew about the Decimal type in Python but had never tried to use it. It states that "Decimal numbers can be represented exactly", and I thought that it meant a clever implementation that allows representing any real number. My first try was:
>>> from decimal import Decimal
>>> d = Decimal(1) / Decimal(3)
>>> d3 = d * Decimal(3)
>>> d3 < Decimal(1)
True
Quite disappointed, I went back to the documentation and kept reading:
The context for arithmetic is an environment specifying precision [...]
OK, so there is actually a precision. And the classic issues can be reproduced:
>>> dd = d * 10**20
>>> dd
Decimal('33333333333333333333.33333333')
>>> for i in range(10000):
...     dd += 1 / Decimal(10**10)
>>> dd
Decimal('33333333333333333333.33333333')
So, my question is: is there a way to have a Decimal type with an infinite precision? If not, what's the most elegant way of comparing two decimal numbers (e.g. d3 < 1 should return False if the delta is less than the precision)?
Currently, when I only do divisions and multiplications, I use the Fraction type:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
Is it the best approach? What could be the other options?
The Decimal class is best for financial-style addition, subtraction, multiplication, and division problems:
>>> (1.1+2.2-3.3)*10000000000000000000
4440.892098500626 # relevant for government invoices...
>>> import decimal
>>> D=decimal.Decimal
>>> (D('1.1')+D('2.2')-D('3.3'))*10000000000000000000
Decimal('0.0')
The Fraction module works well with the rational number problem domain you describe:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
For pure multi precision floating point for scientific work, consider mpmath.
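A minimal mpmath sketch (mp.dps sets the working precision in decimal digits; the output below assumes the default string formatting):
>>> from mpmath import mp, mpf
>>> mp.dps = 50
>>> print(mpf(1) / 3)
0.33333333333333333333333333333333333333333333333333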
If your problem can be held to the symbolic realm, consider sympy. Here is how you would handle the 1/3 issue:
>>> import sympy
>>> sympy.sympify('1/3')*3
1
>>> (sympy.sympify('1/3')*3) == 1
True
Sympy uses mpmath for arbitrary-precision floating point, and includes the ability to handle rational and irrational numbers symbolically.
Consider the pure floating point representation of the irrational value of √2:
>>> import math
>>> math.sqrt(2)
1.4142135623730951
>>> math.sqrt(2)*math.sqrt(2)
2.0000000000000004
>>> math.sqrt(2)*math.sqrt(2)==2
False
Compare to sympy:
>>> sympy.sqrt(2)
sqrt(2) # treated symbolically
>>> sympy.sqrt(2)*sympy.sqrt(2)==2
True
You can also reduce values:
>>> import sympy
>>> sympy.sqrt(8)
2*sqrt(2) # √8 == √(4 x 2) == 2*√2...
However, if you are not careful, you can see issues with sympy similar to straight floating point:
>>> 1.1+2.2-3.3
4.440892098500626e-16
>>> sympy.sympify('1.1+2.2-3.3')
4.44089209850063e-16 # :-(
This is better done with Decimal:
>>> D('1.1')+D('2.2')-D('3.3')
Decimal('0.0')
Or using Fractions or Sympy and keeping values such as 1.1 as ratios:
>>> sympy.sympify('11/10+22/10-33/10')==0
True
>>> Fraction('1.1')+Fraction('2.2')-Fraction('3.3')==0
True
Or use Rational in sympy:
>>> frac=sympy.Rational
>>> frac('1.1')+frac('2.2')-frac('3.3')==0
True
>>> frac('1/3')*3
1
You can play with sympy live.
So, my question is: is there a way to have a Decimal type with an infinite precision?
No, since storing an irrational number would require infinite memory.
Where Decimal is useful is representing things like monetary amounts, where the values need to be exact and the precision is known a priori.
From the question, it is not entirely clear that Decimal is more appropriate for your use case than float.
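For example, a minimal money-style sketch (the 21% tax rate is just an illustrative value):
>>> from decimal import Decimal, ROUND_HALF_UP
>>> total = Decimal('0.10') + Decimal('0.10') + Decimal('0.10')
>>> total
Decimal('0.30')
>>> (total * Decimal('1.21')).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
Decimal('0.36')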
is there a way to have a Decimal type with an infinite precision?
No; for any non-empty interval on the real line, you cannot represent all the numbers in the set with infinite precision using a finite number of bits. This is why Fraction is useful, as it stores the numerator and denominator as integers, which can be represented precisely:
>>> Fraction("1.25")
Fraction(5, 4)
If you are new to Decimal, this post is relevant: Python floating point arbitrary precision available?
The essential idea from the answers and comments is that for computationally tough problems where precision is needed, you should use the mpmath module https://code.google.com/p/mpmath/. An important observation is that:
The problem with using Decimal numbers is that you can't do much in the way of math functions on Decimal objects
Just to point out something that might not be immediately obvious to everyone:
The documentation for the decimal module says
... The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero.
(Also see the classic: Is floating point math broken?)
However, if we use decimal.Decimal naively, we get the same "unexpected" result
>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) == Decimal(0.3)
False
The problem in the naive example above is the use of float arguments, which are "losslessly converted to [their] exact decimal equivalent," as explained in the docs.
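You can see exactly what such a conversion produces by constructing a Decimal from the float 0.1; the long digit string below is the value of the IEEE-754 double closest to 0.1 on a typical platform:
>>> from decimal import Decimal
>>> Decimal(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')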
The trick (implicit in the accepted answer) is to construct the Decimal instances using e.g. strings, instead of floats
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
True
or, perhaps more convenient in some cases, using tuples (<sign>, <digits>, <exponent>)
>>> Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) == Decimal((0, (3,), -1))
True
Note: this does not answer the original question, but it is closely related, and may be of help to people who end up here based on the question title.

Convert fraction to decimal in Python

I want to convert 1/2 in Python so that when I say print x (where x = 1/2) it returns 0.5.
I am looking for the most basic way of doing this, without using any split functions, loops or maps.
I have tried float(1/2) but I get 0...
Can someone explain why, and how to fix it?
Is it possible to do this without modifying the variable x = 1/2?
In Python 3.x, division with / always returns a float:
>>> 1/2
0.5
To achieve that in python 2.x, you have to force float conversion:
>>> 1.0/2
0.5
Or import division from __future__:
>>> from __future__ import division
>>> 1/2
0.5
An extra: there is no built-in fraction type, but there is one in Python's standard library:
>>> from fractions import Fraction
>>> a = Fraction(1, 2) #or Fraction('1/2')
>>> a
Fraction(1, 2)
>>> print a
1/2
>>> float(a)
0.5
and so on...
If the input is a string, then you can use Fraction directly on the input:
from fractions import Fraction
x='1/2'
x=Fraction(x) #change the type of x from string to Fraction
x=float(x) #change the type of x from Fraction to float
print x
You're probably using Python 2. You can "fix" division by using:
from __future__ import division
at the start of your script (before any other imports). By default in Python 2, the / operator performs integer division when given integer operands, which discards the fractional part of the result.
This has been changed in Python 3 so that / is always floating point division. The new // operator performs integer division.
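For comparison, in Python 3:
>>> 7 / 2
3.5
>>> 7 // 2
3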
Alternatively, you can force floating point division by specifying a decimal or by multiplying by 1.0. For instance (from inside the python interpreter):
>>> print 1/2
0
>>> print 1./2
0.5
>>> x = 1/2
>>> print x
0
>>> x = 1./2
>>> print x
0.5
>>> x = 1.0 * 1/2
>>> print x
0.5
EDIT: Looks like I was beaten to the punch in the time it took to type up my response :)
There is no quantity 1/2 anywhere. Python does not represent rational numbers with a built-in type, just integers and floating-point numbers. In Python 2, 1 is divided by 2 following the integer-division rules, resulting in 0, and float(0) is 0.0.

How does Python calculate numbers? [duplicate]

Possible Duplicate:
python - decimal place issues with floats
In [4]: 52+121.2
Out[4]: 173.19999999999999
Short answer: Python uses binary arithmetic for floating-point numbers, not decimal arithmetic. Decimal fractions are not exactly representable in binary.
Long answer: What Every Computer Scientist Should Know About Floating-Point Arithmetic
If you're familiar with the idea that the number "thirteen point two" is written in base ten as "13.2" because it's "10^1 * 1 + 10^0 * 3 + 10^-1 * 2" then try to do the same thing with a base of 2 instead of 10 for the number 173.2.
Here's the whole part:
(1 * 2^7) + (0 * 2^6) + (1 * 2^5) + (0 * 2^4) + (1 * 2^3) + (1 * 2^2) + (0 * 2^1) + (0 * 2^0)
Now here's the start of the fractional part:
(0 * 2^-1) + (0 * 2^-2) + (1 * 2^-3)
That's .125, which isn't yet 2/10ths, so we need more additions of the form (1 * 2^-n). We can carry this out a bit further with (1 * 2^-4) + (1 * 2^-7), which gets us a bit closer, to 0.1953125, but no matter how long we do this, we'll never get to ".2", because ".2" is not representable as a finite sum of numbers of the form (1 * 2^-n).
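Here is a small illustrative sketch of that greedy base-2 expansion (it reads off the first 16 fractional bits of the float closest to 0.2; the repeating 0011 pattern is why the expansion never terminates):
x = 0.2
bits = []
for _ in range(16):
    x *= 2
    bit = int(x)          # 1 if the next (1 * 2^-n) term is present, else 0
    bits.append(str(bit))
    x -= bit
print(''.join(bits))      # 0011001100110011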
Also see .9999… = 1.0 (http://en.wikipedia.org/wiki/0.999...)
Try this:
>>> from decimal import Decimal
>>> Decimal("52") + Decimal("121.2")
Decimal("173.2")
The other answers, pointing to good floating-point resources, are where to start. If you understand floating point roundoff errors, however, and just want your numbers to look prettier and not include a dozen extra digits for that extra last bit of precision, try using str() to print the number rather than repr():
>>> 52+121.2
173.19999999999999
>>> str(52+121.2)
'173.2'
