I was just re-reading What’s New In Python 3.0 and it states:
The round() function rounding strategy and return type have changed.
Exact halfway cases are now rounded to the nearest even result instead
of away from zero. (For example, round(2.5) now returns 2 rather than
3.)
and
the documentation for round:
For the built-in types supporting round(), values are rounded to the
closest multiple of 10 to the power minus n; if two multiples are
equally close, rounding is done toward the even choice
So, under v2.7.3:
In [85]: round(2.5)
Out[85]: 3.0
In [86]: round(3.5)
Out[86]: 4.0
as I'd have expected. However, now under v3.2.3:
In [32]: round(2.5)
Out[32]: 2
In [33]: round(3.5)
Out[33]: 4
This seems counter-intuitive and contrary to what I understand about
rounding (and bound to trip up people). English isn't my native language but
until I read this I thought I knew what rounding meant :-/ I am sure
at the time v3 was introduced there must have been some discussion of
this, but I was unable to find a good reason in my search.
Does anyone have insight into why this was changed?
Are there any other mainstream programming languages (e.g., C, C++, Java, Perl, ..) that do this sort of (to me inconsistent) rounding?
What am I missing here?
UPDATE: @Li-aungYip's comment re "Banker's rounding" gave me the right search term/keywords and I found this SO question: Why does .NET use banker's rounding as default?, so I will be reading that carefully.
Python 3's way (called "round half to even" or "banker's rounding") is considered the standard rounding method these days, though some language implementations aren't on the bus yet.
The simple "always round 0.5 up" technique results in a slight bias toward the higher number. With large numbers of calculations, this can be significant. The Python 3.0 approach eliminates this issue.
There is more than one method of rounding in common use. IEEE 754, the international standard for floating-point math, defines five different rounding methods (the one used by Python 3.0 is the default). And there are others.
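To see that bias concretely, here is a minimal sketch comparing the two half-way rules with the decimal module (the values and the helper function are illustrative, not from the original post):

import decimal

values = [decimal.Decimal(n) / 2 for n in range(1, 20, 2)]  # 0.5, 1.5, ..., 9.5

def total(rounding):
    # Round each value to an integer with the given mode, then sum.
    return sum(v.to_integral_value(rounding=rounding) for v in values)

print(total(decimal.ROUND_HALF_UP))    # 55: every half rounds up, so the sum drifts high
print(total(decimal.ROUND_HALF_EVEN))  # 50: halves alternate up and down, matching the true sum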
This behavior is not as widely known as it ought to be. AppleScript was, if I remember correctly, an early adopter of this rounding method. The round command in AppleScript offers several options, but round-toward-even is the default as it is in IEEE 754. Apparently the engineer who implemented the round command got so fed up with all the requests to "make it work like I learned in school" that he implemented just that: round 2.5 rounding as taught in school is a valid AppleScript command. :-)
You can control the rounding you get in Py3000 using the Decimal module:
>>> import decimal
>>> decimal.Decimal('3.5').quantize(decimal.Decimal('1'),
...                                 rounding=decimal.ROUND_HALF_UP)
Decimal('4')
>>> decimal.Decimal('2.5').quantize(decimal.Decimal('1'),
...                                 rounding=decimal.ROUND_HALF_EVEN)
Decimal('2')
>>> decimal.Decimal('3.5').quantize(decimal.Decimal('1'),
...                                 rounding=decimal.ROUND_HALF_DOWN)
Decimal('3')
Just to add an important note from the documentation:
https://docs.python.org/dev/library/functions.html#round
Note
The behavior of round() for floats can be surprising: for example,
round(2.675, 2) gives 2.67 instead of the expected 2.68. This is not a
bug: it’s a result of the fact that most decimal fractions can’t be
represented exactly as a float. See Floating Point Arithmetic: Issues
and Limitations for more information.
So don't be surprised to get the following results in Python 3.2:
>>> round(0.25,1), round(0.35,1), round(0.45,1), round(0.55,1)
(0.2, 0.3, 0.5, 0.6)
>>> round(0.025,2), round(0.035,2), round(0.045,2), round(0.055,2)
(0.03, 0.04, 0.04, 0.06)
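One quick way to see why is to inspect the exact value the float literal actually stores; a minimal sketch:

from decimal import Decimal

# 2.675 is stored as the nearest binary double, which is slightly below 2.675,
# so rounding to two places correctly gives 2.67.
print(Decimal(2.675))   # 2.67499999999999982236431605997495353221893310546875
print(round(2.675, 2))  # 2.67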
Python 3.x rounds .5 values to the nearest even neighbour:
assert round(0.5) == 0
assert round(1.5) == 2
assert round(2.5) == 2
import decimal
assert decimal.Decimal('0.5').to_integral_value() == 0
assert decimal.Decimal('1.5').to_integral_value() == 2
assert decimal.Decimal('2.5').to_integral_value() == 2
However, one can change decimal rounding "back" to always round .5 up, if needed:
decimal.getcontext().rounding = decimal.ROUND_HALF_UP
assert decimal.Decimal('0.5').to_integral_value() == 1
assert decimal.Decimal('1.5').to_integral_value() == 2
assert decimal.Decimal('2.5').to_integral_value() == 3
i = int(decimal.Decimal('2.5').to_integral_value()) # to get an int
assert i == 3
assert type(i) is int
I recently had problems with this, too. Hence, I have developed a Python 3 module that has two functions, trueround() and trueround_precision(), which address this and give the same rounding behaviour we are used to from primary school (not banker's rounding). Here is the module. Just save the code and copy it in or import it. Note: the trueround_precision function can change the rounding behaviour depending on needs, according to the ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR, ROUND_HALF_DOWN, ROUND_HALF_EVEN, ROUND_HALF_UP, ROUND_UP, and ROUND_05UP flags in the decimal module (see that module's documentation for more info). For the functions below, see the docstrings, or use help(trueround) and help(trueround_precision) if copied into an interpreter, for further documentation.
#! /usr/bin/env python3
# -*- coding: utf-8 -*-

def trueround(number, places=0):
    '''
    trueround(number, places)

    example:

        >>> trueround(2.55, 1) == 2.6
        True

    Uses standard functions with no imports to give "normal" rounding
    behaviour, so that trueround(2.5) == 3, trueround(3.5) == 4,
    trueround(4.5) == 5, etc. Use with caution, however: this still has
    the usual floating-point issues. The return value is an int if
    places == 0, otherwise a float.

    number is the floating-point number to be rounded.

    places is the number of decimal places to round to, with 0 as the
    default, which returns an integer. Otherwise a float rounded to the
    given decimal place is returned.

    Note: use trueround_precision() if true precision with floats is needed.

    GPL 2.0
    copyright by Narnie Harshoe <signupnarnie@gmail.com>
    '''
    place = 10 ** places
    rounded = int(number * place + (0.5 if number >= 0 else -0.5)) / place
    if rounded == int(rounded):
        rounded = int(rounded)
    return rounded
def trueround_precision(number, places=0, rounding=None):
    '''
    trueround_precision(number, places, rounding=ROUND_HALF_UP)

    Uses true precision for floating-point numbers via the 'decimal' module.
    The return value is of type Decimal.

    All rounding options from the decimal module are available, including
    ROUND_CEILING, ROUND_DOWN, ROUND_FLOOR, ROUND_HALF_DOWN, ROUND_HALF_EVEN,
    ROUND_HALF_UP, ROUND_UP, and ROUND_05UP.

    examples:

        >>> trueround_precision(2.5, 0) == Decimal('3')
        True
        >>> trueround_precision(2.5, 0, ROUND_DOWN) == Decimal('2')
        True

    number is a floating-point number, or a string containing a number,
    on which to act.

    places is the number of decimal places to round to, with 0 as the default.

    Note: if a float is passed as the first argument, it is first converted
    to a str for correct rounding.

    GPL 2.0
    copyright by Narnie Harshoe <signupnarnie@gmail.com>
    '''
    from decimal import Decimal as dec
    from decimal import (ROUND_HALF_UP, ROUND_CEILING, ROUND_DOWN,
                         ROUND_FLOOR, ROUND_HALF_DOWN, ROUND_HALF_EVEN,
                         ROUND_UP, ROUND_05UP)

    if isinstance(number, float):
        number = str(number)
    if rounding is None:
        rounding = ROUND_HALF_UP
    place = '1.'
    for i in range(places):
        place = ''.join([place, '0'])
    return dec(number).quantize(dec(place), rounding=rounding)
Hope this helps,
Narnie
Python 2 rounding behaviour in Python 3: add 1 at the 15th decimal place. This preserves accuracy up to 15 digits.

round2 = lambda x, y=None: round(x + 1e-15, y)

This is not right for 175.57: as the number grows, the offset would have to be added at the 13th decimal place instead. Switching to Decimal is better than reinventing the same wheel.
from decimal import Decimal, ROUND_HALF_UP

def round2(x, y=2):
    prec = Decimal(10) ** -y
    return float(Decimal(str(round(x, 3))).quantize(prec, rounding=ROUND_HALF_UP))

(Note: the inner round(x, 3) hard-codes 3 decimal places and does not use y.)
Some cases:
in: Decimal(75.29 / 2).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
in: round(75.29 / 2, 2)
out: 37.65 GOOD
in: Decimal(85.55 / 2).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
in: round(85.55 / 2, 2)
out: 42.77 BAD
To fix it:
in: round(75.29 / 2 + 0.00001, 2)
out: 37.65 GOOD
in: round(85.55 / 2 + 0.00001, 2)
out: 42.78 GOOD
If you want more decimals, for example 4, you should add (+ 0.0000001).
Works for me.
Sample Reproduction:
['{} => {}'.format(x+0.5, round(x+0.5)) for x in range(10)]
['0.5 => 0', '1.5 => 2', '2.5 => 2', '3.5 => 4', '4.5 => 4', '5.5 => 6', '6.5 => 6', '7.5 => 8', '8.5 => 8', '9.5 => 10']
API: https://docs.python.org/3/library/functions.html#round
States:
Return number rounded to ndigits precision after the decimal point. If
ndigits is omitted or is None, it returns the nearest integer to its
input.
For the built-in types supporting round(), values are rounded to the
closest multiple of 10 to the power minus ndigits; if two multiples
are equally close, rounding is done toward the even choice (so, for
example, both round(0.5) and round(-0.5) are 0, and round(1.5) is 2).
Any integer value is valid for ndigits (positive, zero, or negative).
The return value is an integer if ndigits is omitted or None.
Otherwise the return value has the same type as number.
For a general Python object number, round delegates to
number.__round__.
Note The behavior of round() for floats can be surprising: for
example, round(2.675, 2) gives 2.67 instead of the expected 2.68. This
is not a bug: it’s a result of the fact that most decimal fractions
can’t be represented exactly as a float. See Floating Point
Arithmetic: Issues and Limitations for more information.
Given this insight, you can use some math to resolve it:
import math

def my_round(i):
    f = math.floor(i)
    return f if i - f < 0.5 else f + 1
Now you can run the same test with my_round instead of round:
['{} => {}'.format(x + 0.5, my_round(x+0.5)) for x in range(10)]
['0.5 => 1', '1.5 => 2', '2.5 => 3', '3.5 => 4', '4.5 => 5', '5.5 => 6', '6.5 => 7', '7.5 => 8', '8.5 => 9', '9.5 => 10']
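As an aside, the documentation quoted above also says that round() delegates to __round__, so a type can opt back into "school" rounding itself. A minimal sketch (the class name is made up for illustration):

import math

class SchoolFloat(float):
    """A float subclass whose round() sends halves away from zero."""
    def __round__(self, ndigits=None):
        scale = 10 ** (ndigits or 0)
        shifted = self * scale
        rounded = math.floor(shifted + 0.5) if shifted >= 0 else math.ceil(shifted - 0.5)
        return rounded / scale if ndigits else int(rounded)

print(round(SchoolFloat(2.5)))   # 3
print(round(SchoolFloat(-2.5)))  # -3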
I propose a custom function which works for a DataFrame:

import numpy as np

def dfCustomRound(df, dec):
    d = 1 / 10 ** dec
    df = round(df, dec + 2)
    return (((df % (1 * d)) == 0.5 * d).astype(int) * 0.1 * d * np.sign(df) + df).round(dec)
# np.round gives the desired X+1 here when the value is X.5
import numpy as np

example_of_some_variable = 3.5
rounded_result_of_variable = np.round(example_of_some_variable, 0)
print(rounded_result_of_variable)  # 4.0
Try this code:
def roundup(value):
    # If the value ends in a 5, bump that final digit up to a 6 so round() rounds upward.
    demo = str(value)
    if demo[-1] == "5":
        demo = demo[:-1] + "6"
    place = len(demo.split(".")[1]) - 1
    return round(float(demo), place)
The result will be:
>>> x = roundup(2.5)
>>> x
3.0
>>> x = roundup(2.05)
>>> x
2.1
>>> x = roundup(2.005)
>>> x
2.01
You can check the output here:
https://i.stack.imgur.com/QQUkS.png
The easiest way to round in Python 3.x as taught in school is using an auxiliary variable:
n = 0.1
round(2.5 + n)
And these will be the results of the series 2.0 to 3.0 (in 0.1 steps):
>>> round(2 + n)
2
>>> round(2.1 + n)
2
>>> round(2.2 + n)
2
>>> round(2.3 + n)
2
>>> round(2.4 + n)
2
>>> round(2.5 + n)
3
>>> round(2.6 + n)
3
>>> round(2.7 + n)
3
>>> round(2.8 + n)
3
>>> round(2.9 + n)
3
>>> round(3 + n)
3
You can force rounding up by using the math.ceil function:

import math
print(math.ceil(2.5))  # 3
I want to remove digits from a float to have a fixed number of digits after the dot, like:
1.923328437452 → 1.923
I need to output as a string to another function, not print.
Also I want to ignore the lost digits, not round them.
round(1.923328437452, 3)
See Python's documentation on the standard types. You'll need to scroll down a bit to get to the round function. Essentially the second number says how many decimal places to round it to.
First, the function, for those who just want some copy-and-paste code:
def truncate(f, n):
    '''Truncates/pads a float f to n decimal places without rounding'''
    s = '{}'.format(f)
    if 'e' in s or 'E' in s:
        return '{0:.{1}f}'.format(f, n)
    i, p, d = s.partition('.')
    return '.'.join([i, (d + '0' * n)[:n]])
This is valid in Python 2.7 and 3.1+. For older versions, it's not possible to get the same "intelligent rounding" effect (at least, not without a lot of complicated code), but rounding to 12 decimal places before truncation will work much of the time:
def truncate(f, n):
    '''Truncates/pads a float f to n decimal places without rounding'''
    s = '%.12f' % f
    i, p, d = s.partition('.')
    return '.'.join([i, (d + '0' * n)[:n]])
Explanation
The core of the underlying method is to convert the value to a string at full precision and then just chop off everything beyond the desired number of characters. The latter step is easy; it can be done either with string manipulation
i, p, d = s.partition('.')
'.'.join([i, (d+'0'*n)[:n]])
or the decimal module
str(Decimal(s).quantize(Decimal((0, (1,), -n)), rounding=ROUND_DOWN))
The first step, converting to a string, is quite difficult because there are some pairs of floating point literals (i.e. what you write in the source code) which both produce the same binary representation and yet should be truncated differently. For example, consider 0.3 and 0.29999999999999998. If you write 0.3 in a Python program, the compiler encodes it using the IEEE floating-point format into the sequence of bits (assuming a 64-bit float)
0011111111010011001100110011001100110011001100110011001100110011
This is the closest value to 0.3 that can accurately be represented as an IEEE float. But if you write 0.29999999999999998 in a Python program, the compiler translates it into exactly the same value. In one case, you meant it to be truncated (to one digit) as 0.3, whereas in the other case you meant it to be truncated as 0.2, but Python can only give one answer. This is a fundamental limitation of Python, or indeed any programming language without lazy evaluation. The truncation function only has access to the binary value stored in the computer's memory, not the string you actually typed into the source code.[1]
If you decode the sequence of bits back into a decimal number, again using the IEEE 64-bit floating-point format, you get
0.2999999999999999888977697537484345957637...
so a naive implementation would come up with 0.2 even though that's probably not what you want. For more on floating-point representation error, see the Python tutorial.
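A quick check makes this concrete (a minimal sketch; the format width is arbitrary):

# Both literals map to the same 64-bit double, so Python cannot tell them apart.
print(0.3 == 0.29999999999999998)  # True
print('{:.20f}'.format(0.3))       # 0.29999999999999998890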
It's very rare to be working with a floating-point value that is so close to a round number and yet is intentionally not equal to that round number. So when truncating, it probably makes sense to choose the "nicest" decimal representation out of all that could correspond to the value in memory. Python 2.7 and up (but not 3.0) includes a sophisticated algorithm to do just that, which we can access through the default string formatting operation.
'{}'.format(f)
The only caveat is that this acts like a g format specification, in the sense that it uses exponential notation (1.23e+4) if the number is large or small enough. So the method has to catch this case and handle it differently. There are a few cases where using an f format specification instead causes a problem, such as trying to truncate 3e-10 to 28 digits of precision (it produces 0.0000000002999999999999999980), and I'm not yet sure how best to handle those.
If you actually are working with floats that are very close to round numbers but intentionally not equal to them (like 0.29999999999999998 or 99.959999999999994), this will produce some false positives, i.e. it'll round numbers that you didn't want rounded. In that case the solution is to specify a fixed precision.
'{0:.{1}f}'.format(f, sys.float_info.dig + n + 2)
The number of digits of precision to use here doesn't really matter, it only needs to be large enough to ensure that any rounding performed in the string conversion doesn't "bump up" the value to its nice decimal representation. I think sys.float_info.dig + n + 2 may be enough in all cases, but if not that 2 might have to be increased, and it doesn't hurt to do so.
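As an illustration of the difference (a minimal sketch; n is the desired number of truncated digits):

import sys

f = 0.29999999999999998
n = 1

print('{}'.format(f))                                     # '0.3'  -> would truncate to 0.3
print('{0:.{1}f}'.format(f, sys.float_info.dig + n + 2))  # '0.299999999999999989' -> truncates to 0.2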
In earlier versions of Python (up to 2.6, or 3.0), the floating point number formatting was a lot more crude, and would regularly produce things like
>>> 1.1
1.1000000000000001
If this is your situation, if you do want to use "nice" decimal representations for truncation, all you can do (as far as I know) is pick some number of digits, less than the full precision representable by a float, and round the number to that many digits before truncating it. A typical choice is 12,
'%.12f' % f
but you can adjust this to suit the numbers you're using.
[1] Well... I lied. Technically, you can instruct Python to re-parse its own source code and extract the part corresponding to the first argument you pass to the truncation function. If that argument is a floating-point literal, you can just cut it off a certain number of places after the decimal point and return that. However, this strategy doesn't work if the argument is a variable, which makes it fairly useless. The following is presented for entertainment value only:
import inspect
import io
import tokenize

def trunc_introspect(f, n):
    '''Truncates/pads the float f to n decimal places by looking at the caller's source code'''
    current_frame = None
    caller_frame = None
    s = inspect.stack()
    try:
        current_frame = s[0]
        caller_frame = s[1]
        gen = tokenize.tokenize(io.BytesIO(caller_frame[4][caller_frame[5]].encode('utf-8')).readline)
        for token_type, token_string, _, _, _ in gen:
            if token_type == tokenize.NAME and token_string == current_frame[3]:
                next(gen)  # left parenthesis
                token_type, token_string, _, _, _ = next(gen)  # float literal
                if token_type == tokenize.NUMBER:
                    try:
                        cut_point = token_string.index('.') + n + 1
                    except ValueError:  # no decimal point in the literal
                        return token_string + '.' + '0' * n
                    else:
                        if len(token_string) < cut_point:
                            token_string += '0' * (cut_point - len(token_string))
                        return token_string[:cut_point]
                else:
                    raise ValueError('Unable to find floating-point literal (this probably means you called {} with a variable)'.format(current_frame[3]))
                break
    finally:
        del s, current_frame, caller_frame
Generalizing this to handle the case where you pass in a variable seems like a lost cause, since you'd have to trace backwards through the program's execution until you find the floating-point literal which gave the variable its value. If there even is one. Most variables will be initialized from user input or mathematical expressions, in which case the binary representation is all there is.
The result of round is a float, so watch out (example is from Python 2.6):
>>> round(1.923328437452, 3)
1.923
>>> round(1.23456, 3)
1.2350000000000001
You will be better off when using a formatted string:
>>> "%.3f" % 1.923328437452
'1.923'
>>> "%.3f" % 1.23456
'1.235'
n = 1.923328437452
str(n)[:5]  # '1.923'
At my Python 2.7 prompt:
>>> int(1.923328437452 * 1000)/1000.0
1.923
The truly Pythonic way of doing it is:
from decimal import *

with localcontext() as ctx:
    ctx.rounding = ROUND_DOWN
    print(Decimal('1.923328437452').quantize(Decimal('0.001')))
or shorter:
from decimal import Decimal as D, ROUND_DOWN
D('1.923328437452').quantize(D('0.001'), rounding=ROUND_DOWN)
Update
Usually the problem is not in truncating floats itself, but in the improper usage of float numbers before rounding.
For example: int(0.7*3*100)/100 == 2.09.
If you are forced to use floats (say, you're accelerating your code with numba), it's better to use cents as "internal representation" of prices: (70*3 == 210) and multiply/divide the inputs/outputs.
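A minimal sketch of the "cents as internal representation" idea (the helper names and prices are made up for illustration):

# Keep prices as integer cents internally; convert only at the boundaries.
def to_cents(price_str):
    euros, _, cents = price_str.partition('.')
    return int(euros) * 100 + int((cents + '00')[:2])

def to_price(cents):
    return '{}.{:02d}'.format(cents // 100, cents % 100)

total = to_cents('0.70') * 3   # 210 cents, exactly
print(to_price(total))         # 2.10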
A simple Python snippet:

n = 1.923328437452
n = float(int(n * 1000))
n /= 1000
def trunc(num, digits):
    sp = str(num).split('.')
    return '.'.join([sp[0], sp[1][:digits]])
This should work. It should give you the truncation you are looking for.
So many of the answers given for this question are just completely wrong. They either round up floats (rather than truncate) or do not work for all cases.
This is the top Google result when I search for 'Python truncate float', a concept which is really straightforward, and which deserves better answers. I agree with Hatchkins that using the decimal module is the pythonic way of doing this, so I give here a function which I think answers the question correctly, and which works as expected for all cases.
As a side-note, fractional values, in general, cannot be represented exactly by binary floating point variables (see here for a discussion of this), which is why my function returns a string.
from decimal import Decimal, localcontext, ROUND_DOWN

def truncate(number, places):
    if not isinstance(places, int):
        raise ValueError("Decimal places must be an integer.")
    if places < 1:
        raise ValueError("Decimal places must be at least 1.")
    # If you want to truncate to 0 decimal places, just do int(number).

    with localcontext() as context:
        context.rounding = ROUND_DOWN
        exponent = Decimal(str(10 ** -places))
        return Decimal(str(number)).quantize(exponent).to_eng_string()
>>> from math import floor
>>> floor((1.23658945) * 10**4) / 10**4
1.2365
# divide and multiply by 10**number of desired digits
If you fancy some mathemagic, this works for +ve numbers:
>>> v = 1.923328437452
>>> v - v % 1e-3
1.923
I did something like this:
from math import trunc

def truncate(number, decimals=0):
    if decimals < 0:
        raise ValueError('truncate received an invalid value of decimals ({})'.format(decimals))
    elif decimals == 0:
        return trunc(number)
    else:
        factor = float(10 ** decimals)
        return trunc(number * factor) / factor
You can do:
import math

def truncate(f, n):
    return math.floor(f * 10 ** n) / 10 ** n
testing:
>>> f=1.923328437452
>>> [truncate(f, n) for n in range(5)]
[1.0, 1.9, 1.92, 1.923, 1.9233]
Just wanted to mention that the old "make round() with floor()" trick of
round(f) = floor(f+0.5)
can be turned around to make floor() from round()
floor(f) = round(f-0.5)
Although both these rules break around negative numbers, so using it is less than ideal:
def trunc(f, n):
    if f > 0:
        return "%.*f" % (n, (f - 0.5 * 10 ** -n))
    elif f == 0:
        return "%.*f" % (n, f)
    elif f < 0:
        return "%.*f" % (n, (f + 0.5 * 10 ** -n))
def precision(value, precision):
    """
    param: value: takes a float
    param: precision: int, number of decimal places
    returns a float
    """
    x = 10.0 ** precision
    num = int(value * x) / x
    return num

precision(1.923328437452, 3)  # 1.923
Short and easy variant:

def truncate_float(value, digits_after_point=2):
    pow_10 = 10 ** digits_after_point
    return float(int(value * pow_10)) / pow_10

>>> truncate_float(1.14333, 2)
1.14
>>> truncate_float(1.14777, 2)
1.14
>>> truncate_float(1.14777, 4)
1.1477
When using a pandas df this worked for me
import math

def truncate(number, digits) -> float:
    stepper = 10.0 ** digits
    return math.trunc(stepper * number) / stepper

df['trunc'] = df['float_val'].apply(lambda x: truncate(x, 1))
df['trunc'] = df['trunc'].map('{:.1f}'.format)
int(16.5) will give an integer value of 16, i.e. it truncates. It won't let you specify decimals, but you can do that with:

import math

def trunc(invalue, digits):
    return int(invalue * math.pow(10, digits)) / math.pow(10, digits)
Here is an easy way:
from math import floor

def truncate(num, res=3):
    # Note: the +0.5 means this rounds to the nearest value rather than strictly truncating.
    return floor(num * pow(10, res) + 0.5) / pow(10, res)
for num = 1.923328437452, this outputs 1.923
def trunc(f, n):
    return ('%.16f' % f)[:(n - 16)]
A general and simple function to use:
def truncate_float(number, length):
    """Truncate float numbers, up to the number specified
    in length that must be an integer"""
    number = number * pow(10, length)
    number = int(number)
    number = float(number)
    number /= pow(10, length)
    return number
There is an easy workaround in Python 3. Where to cut is defined with a helper variable, decPlace, to make it easy to adapt.

f = 1.12345
decPlace = 4
f_cut = int(f * 10**decPlace) / 10**decPlace

Output:
f_cut = 1.1234
Hope it helps.
Most answers are way too complicated in my opinion, how about this?
digits = 2  # Specify how many digits you want

fnum = '122.485221'
truncated_float = float(fnum[:fnum.find('.') + digits + 1])
# 122.48
Simply scanning for the index of '.' and truncate as desired (no rounding).
Convert string to float as final step.
Or in your case if you get a float as input and want a string as output:
fnum = str(122.485221) # convert float to string first
truncated_float = fnum[:fnum.find('.') + digits + 1] # string output
I think a better version would be just to find the index of decimal point . and then to take the string slice accordingly:
def truncate(number, n_digits: int = 1) -> float:
    '''
    :param number: real number ℝ
    :param n_digits: Maximum number of digits after the decimal point after truncation
    :return: truncated floating point number with at least one digit after decimal point
    '''
    decimalIndex = str(number).find('.')
    if decimalIndex == -1:
        return float(number)
    else:
        return float(str(number)[:decimalIndex + n_digits + 1])
>>> int(1.923328437452 * 1000) / 1000
1.923
>>> int(1.9239 * 1000) / 1000
1.923
By multiplying the number by 1000 (10^3 for 3 digits) we shift the decimal point 3 places to the right and get 1923.3284374520001. When we convert that to an int, the fractional part .3284374520001 is discarded. Then we undo the shift of the decimal point by dividing by 1000, which returns 1.923.
Use numpy.round:
import numpy as np
precision = 3
floats = [1.123123123, 2.321321321321]
new_float = np.round(floats, precision)
Something simple enough to fit in a list-comprehension, with no libraries or other external dependencies. For Python >=3.6, it's very simple to write with f-strings.
The idea is to let the string-conversion do the rounding to one more place than you need and then chop off the last digit.
>>> nout = 3 # desired number of digits in output
>>> [f'{x:.{nout+1}f}'[:-1] for x in [2/3, 4/5, 8/9, 9/8, 5/4, 3/2]]
['0.666', '0.800', '0.888', '1.125', '1.250', '1.500']
Of course, there is rounding happening here (namely for the fourth digit), but rounding at some point is unavoidable. In case the transition between truncation and rounding is relevant, here's a slightly better example:
>>> nacc = 6 # desired accuracy (maximum 15!)
>>> nout = 3 # desired number of digits in output
>>> [f'{x:.{nacc}f}'[:-(nacc-nout)] for x in [2.9999, 2.99999, 2.999999, 2.9999999]]
['2.999', '2.999', '2.999', '3.000']
Bonus: removing zeros on the right
>>> nout = 3 # desired number of digits in output
>>> [f'{x:.{nout+1}f}'[:-1].rstrip('0') for x in [2/3, 4/5, 8/9, 9/8, 5/4, 3/2]]
['0.666', '0.8', '0.888', '1.125', '1.25', '1.5']
The core idea given here seems to me to be the best approach for this problem.
Unfortunately, it has received fewer votes, while the later answer that has more votes is not complete (as observed in the comments). Hopefully, the implementation below provides a short and complete solution for truncation.
def trunc(num, digits):
    l = str(float(num)).split('.')
    digits = min(len(l[1]), digits)
    return l[0] + '.' + l[1][:digits]
which should take care of all corner cases found here and here.
I am also a Python newbie and, after making use of some bits and pieces here, I offer my two cents:

import time
from datetime import datetime

print(str(int(time.time())) + str(datetime.now().microsecond)[:3])
str(int(time.time())) will take the time epoch as int and convert it to string and join with...
str(datetime.now().microsecond)[:3] which returns the microseconds only, convert to string and truncate to first 3 chars
# value: the value to be truncated
# n: the number of digits to keep after the decimal point
value = 0.999782
n = 3
float(int(value * 10**n)) / 10**n  # 0.999
Everybody knows, or at least, every programmer should know, that using the float type could lead to precision errors. However, in some cases, an exact solution would be great and there are cases where comparing using an epsilon value is not enough. Anyway, that's not really the point.
I knew about the Decimal type in Python but never tried to use it. It states that "Decimal numbers can be represented exactly" and I thought that meant a clever implementation that allows representing any real number. My first try was:
>>> from decimal import Decimal
>>> d = Decimal(1) / Decimal(3)
>>> d3 = d * Decimal(3)
>>> d3 < Decimal(1)
True
Quite disappointed, I went back to the documentation and kept reading:
The context for arithmetic is an environment specifying precision [...]
OK, so there is actually a precision. And the classic issues can be reproduced:
>>> dd = d * 10**20
>>> dd
Decimal('33333333333333333333.33333333')
>>> for i in range(10000):
... dd += 1 / Decimal(10**10)
>>> dd
Decimal('33333333333333333333.33333333')
So, my question is: is there a way to have a Decimal type with an infinite precision? If not, what's the more elegant way of comparing 2 decimal numbers (e.g. d3 < 1 should return False if the delta is less than the precision).
Currently, when I only do divisions and multiplications, I use the Fraction type:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
Is it the best approach? What could be the other options?
The Decimal class is best for financial-style addition, subtraction, multiplication, and division problems:
>>> (1.1+2.2-3.3)*10000000000000000000
4440.892098500626 # relevant for government invoices...
>>> import decimal
>>> D=decimal.Decimal
>>> (D('1.1')+D('2.2')-D('3.3'))*10000000000000000000
Decimal('0.0')
The Fraction module works well with the rational number problem domain you describe:
>>> from fractions import Fraction
>>> f = Fraction(1) / Fraction(3)
>>> f
Fraction(1, 3)
>>> f * 3 < 1
False
>>> f * 3 == 1
True
For pure multi precision floating point for scientific work, consider mpmath.
If your problem can be held to the symbolic realm, consider sympy. Here is how you would handle the 1/3 issue:
>>> import sympy
>>> sympy.sympify('1/3')*3
1
>>> (sympy.sympify('1/3')*3) == 1
True
Sympy uses mpmath for arbitrary precision floating point, includes the ability to handle rational numbers and irrational numbers symbolically.
Consider the pure floating point representation of the irrational value of √2:
>>> import math
>>> math.sqrt(2)
1.4142135623730951
>>> math.sqrt(2)*math.sqrt(2)
2.0000000000000004
>>> math.sqrt(2)*math.sqrt(2)==2
False
Compare to sympy:
>>> sympy.sqrt(2)
sqrt(2) # treated symbolically
>>> sympy.sqrt(2)*sympy.sqrt(2)==2
True
You can also reduce values:
>>> import sympy
>>> sympy.sqrt(8)
2*sqrt(2) # √8 == √(4 x 2) == 2*√2...
However, you can see issues with Sympy similar to straight floating point if not careful:
>>> 1.1+2.2-3.3
4.440892098500626e-16
>>> sympy.sympify('1.1+2.2-3.3')
4.44089209850063e-16 # :-(
This is better done with Decimal:
>>> D('1.1')+D('2.2')-D('3.3')
Decimal('0.0')
Or using Fractions or Sympy and keeping values such as 1.1 as ratios:
>>> sympy.sympify('11/10+22/10-33/10')==0
True
>>> Fraction('1.1')+Fraction('2.2')-Fraction('3.3')==0
True
Or use Rational in sympy:
>>> frac=sympy.Rational
>>> frac('1.1')+frac('2.2')-frac('3.3')==0
True
>>> frac('1/3')*3
1
You can play with sympy live.
So, my question is: is there a way to have a Decimal type with an infinite precision?
No, since storing an irrational number would require infinite memory.
Where Decimal is useful is representing things like monetary amounts, where the values need to be exact and the precision is known a priori.
From the question, it is not entirely clear that Decimal is more appropriate for your use case than float.
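The precision is finite but configurable, and arithmetic on values that are exact in decimal (such as money) stays exact at any precision; a minimal sketch:

from decimal import Decimal, getcontext

getcontext().prec = 50                    # 50 significant digits instead of the default 28
print(Decimal(1) / Decimal(3))            # 0.33333... (50 threes)
print(Decimal('0.10') + Decimal('0.20'))  # 0.30, exact: both operands are exact decimals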
is there a way to have a Decimal type with an infinite precision?
No; for any non-empty interval on the real line, you cannot represent all the numbers in the set with infinite precision using a finite number of bits. This is why Fraction is useful, as it stores the numerator and denominator as integers, which can be represented precisely:
>>> Fraction("1.25")
Fraction(5, 4)
If you are new to Decimal, this post is relevant: Python floating point arbitrary precision available?
The essential idea from the answers and comments is that for computationally tough problems where precision is needed, you should use the mpmath module https://code.google.com/p/mpmath/. An important observation is that,
The problem with using Decimal numbers is that you can't do much in the way of math functions on Decimal objects
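A minimal sketch of mpmath in that spirit (assuming mpmath is installed; mp.dps sets the number of decimal digits to work with):

from mpmath import mp, mpf, sqrt

mp.dps = 50               # work with 50 significant decimal digits
print(mpf(1) / 3)         # 0.33333... (50 significant digits)
print(sqrt(mpf(2)) ** 2)  # agrees with 2 to within the working precision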
Just to point out something that might not be immediately obvious to everyone:
The documentation for the decimal module says
... The exactness carries over into arithmetic. In decimal floating point, 0.1 + 0.1 + 0.1 - 0.3 is exactly equal to zero.
(Also see the classic: Is floating point math broken?)
However, if we use decimal.Decimal naively, we get the same "unexpected" result
>>> Decimal(0.1) + Decimal(0.1) + Decimal(0.1) == Decimal(0.3)
False
The problem in the naive example above is the use of float arguments, which are "losslessly converted to [their] exact decimal equivalent," as explained in the docs.
The trick (implicit in the accepted answer) is to construct the Decimal instances using e.g. strings, instead of floats
>>> Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3')
True
or, perhaps more convenient in some cases, using tuples (<sign>, <digits>, <exponent>)
>>> Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) + Decimal((0, (1,), -1)) == Decimal((0, (3,), -1))
True
Note: this does not answer the original question, but it is closely related, and may be of help to people who end up here based on the question title.
I've seen a few questions about this already, but none that I read helped me actually understand why what I am trying to do is failing.
So I have a bunch of floating point values, and they have different precisions. Some are 0.1 others are 1.759374, etc. And I want to format them so they are ALL in the form of "+0.0000000E+00" I tried doing
number = '%1.7f' % oldnumber
but that didn't work. I thought what I was telling it to do was "one digit before the decimal point, and 7 after, float", but it doesn't work. I'm not really getting the examples in the docs, which don't seem to even bother with "before and after the decimal point" issues, and I didn't find a question that was about fixing digits before and after the decimal point.
Now, I know that some of my numbers are 0.0437 or similar, and I want them to appear as 4.3700000E-02 or something. I was sort of hoping it would do the E bit on its own, but if it doesn't, how do I do it?
Here is the exact line I have:
RealValConv = '%1.7g' % struct.unpack('!f', RealVal.decode('hex'))[0]
RealVal is a hex number that represents the value I want.
Also, this is in Python 2.7
>>> '{:.7e}'.format(0.00000000000000365913456789)
'3.6591346e-15'
You can use the scientific notation format, something like this:
number = '%e' % oldnumber
>>> x = 1.759374
>>> print '%e' % x
1.759374e+00
>>>
>>> x = 1.79
>>> print '%e' % x
1.790000e+00
>>>
>>> x = 1.798775655
>>> print '%e' % x
1.798776e+00
>>>
Or, if you want to control precision, you can use the format method as suggested in @leon's approach (+1).
>>> x = 1.759374
>>>
>>> print('{:.2e}'.format(x))
1.76e+00
>>>
>>> print('{:.10e}'.format(x))
1.7593740000e+00
>>>
>>> print('{:.4e}'.format(x))
1.7594e+00
>>>
Is there a way to get the ceil of a high precision Decimal in python?
>>> import decimal, math
>>> decimal.Decimal(800000000000000000001)/100000000000000000000
Decimal('8.00000000000000000001')
>>> math.ceil(decimal.Decimal(800000000000000000001)/100000000000000000000)
8.0
math.ceil rounds the value (after converting it to a float) and returns an imprecise result
The most direct way to take the ceiling of a Decimal instance x is to use x.to_integral_exact(rounding=ROUND_CEILING). There's no need to mess with the context here. Note that this sets the Inexact and Rounded flags where appropriate; if you don't want the flags touched, use x.to_integral_value(rounding=ROUND_CEILING) instead. Example:
>>> from decimal import Decimal, ROUND_CEILING
>>> x = Decimal('-123.456')
>>> x.to_integral_exact(rounding=ROUND_CEILING)
Decimal('-123')
Unlike most of the Decimal methods, the to_integral_exact and to_integral_value methods aren't affected by the precision of the current context, so you don't have to worry about changing precision:
>>> from decimal import getcontext
>>> getcontext().prec = 2
>>> x.to_integral_exact(rounding=ROUND_CEILING)
Decimal('-123')
By the way, in Python 3.x, math.ceil works exactly as you want it to, except that it returns an int rather than a Decimal instance. That works because math.ceil is overloadable for custom types in Python 3. In Python 2, math.ceil simply converts the Decimal instance to a float first, potentially losing information in the process, so you can end up with incorrect results.
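A quick illustration of that Python 3 behaviour (a minimal sketch):

import math
from decimal import Decimal

x = Decimal(800000000000000000001) / Decimal(100000000000000000000)
print(x)             # 8.00000000000000000001
print(math.ceil(x))  # 9, an int; Decimal.__ceil__ is used, so no float conversion happens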
x = decimal.Decimal('8.00000000000000000000001')
with decimal.localcontext() as ctx:
    ctx.prec = 100000000000000000
    ctx.rounding = decimal.ROUND_CEILING
    y = x.to_integral_exact()
You can do this using the precision and rounding mode option of the Context constructor.
ctx = decimal.Context(prec=1, rounding=decimal.ROUND_CEILING)
ctx.divide(decimal.Decimal(800000000000000000001), decimal.Decimal(100000000000000000000))
EDIT: You should consider changing the accepted answer. Although prec can be increased as needed, to_integral_exact is a simpler solution.
>>> decimal.Context(rounding=decimal.ROUND_CEILING).quantize(
... decimal.Decimal(800000000000000000001)/100000000000000000000, 0)
Decimal('9')
def decimal_ceil(x):
    int_x = int(x)
    if x - int_x == 0:
        return int_x
    return int_x + 1
Just use a power of ten to do this.

import math

def lo_ceil(num, potency=0):  # use 0 for multiples of 1, 1 for multiples of 10, 2 for 100, ...
    n = num / (10.0 ** potency)
    c = math.ceil(n)
    return c * (10.0 ** potency)

lo_ceil(8.0000001, 1)  # returns 10