It's quite a simple question actually: let's say I have the number 1.499998499999999e-98. If I wanted to round it up to ~1.5e-98, how would I go about it? I tried round() but it gives me 0.0, which is kind of useless for what I'm working on.
One (kinda 'hacky') way would be to format the number as a string and convert back to float:
>>> x = 1.499998499999999e-98
>>> "%.2e" % x
'1.50e-98'
>>> float("%.2e" % x)
1.5e-98
Unlike round, this rounds to significant digits rather than to an absolute number of decimal places.
With round you can specify the number of digits you want to round to (see the documentation), and also read the note there about rounding sometimes being surprising.
Numpy may give you a little more control over rounding.
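If you'd rather stay numeric, here is a minimal sketch of rounding to a given number of significant digits using math.log10; the helper name is mine and it assumes the input is non-zero:
import math

def round_sig(x, sig=2):
    # shift so that `sig` significant digits sit left of the rounding point
    return round(x, -int(math.floor(math.log10(abs(x)))) + (sig - 1))

print(round_sig(1.499998499999999e-98))  # 1.5e-98 (or the closest float to it)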
Related
Basically, I have a list of float numbers with too many decimals. So when I created a second list with two decimals, Python rounded them. I used the following:
g1= ["%.2f" % i for i in g]
Where g1 is the new list with two decimals, but rounded, and g is the list with float numbers.
How can I make one without rounding them?
I'm a newbie, btw. Thanks!
So, you want to truncate the numbers at the second digit?
Beware that rounding might be the better and more accurate solution anyway.
If you want to truncate the numbers, there are a couple of ways - one of them is to multiply the number by 10 raised to the number of desired decimal places (100 for 2 places), apply math.floor, and divide the result back by the same factor.
However, since internal floating point arithmetic is not base 10, you risk getting extra decimal places back when you divide to scale down.
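A minimal sketch of that first approach (the function name is mine; note that math.floor goes toward negative infinity, so negative numbers are cut downward rather than toward zero - math.trunc would cut toward zero):
import math

def truncate_2(x):
    # scale up, drop the fractional part, scale back down
    return math.floor(x * 100) / 100

print(truncate_2(1.227))    # 1.22
print(truncate_2(2.99999))  # 2.99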
Another way is to create a string with 3 digits after the "." and drop the last one - that'd be rounding proof.
And again, keep in mind that this converts the numbers to strings, which should be done for presentation purposes only. Also, "%" formatting is quite an old way of interpolating values into a string; in modern Python, f-strings are the preferred way:
g1 = [f"{number:.03f}"[:-1] for number in g]
Another, more correct way is, of course, to treat numbers as numbers and not play tricks with adding or removing digits. As noted in the comments, the method above works for numbers like 1.227, which is kept as "1.22", but not for 2.99999, which is first rounded to "3.000" and then truncated to "3.00".
Python has the decimal module, which allows arbitrary precision for decimal numbers - including less precision, if needed - and control over the way rounding is done, including rounding towards zero instead of to the nearest number.
Just set the decimal context to the decimal.ROUND_DOWN strategy, and then either round your numbers with the round built-in (with Decimal, the exact number of digits is guaranteed, unlike when using round with floating point numbers) or just do the rounding as part of the string formatting anyway. You can also convert your floats to Decimals in the same step:
from decimal import Decimal as D, getcontext, ROUND_DOWN
getcontext().rounding = ROUND_DOWN
g1 = [f"{D(number):.02f}" for number in g]
Again - by doing this, you could just as well keep your numbers as Decimal objects and still be able to perform math operations on them:
g2 = [round(D(number), 2) for number in g]
Here is my solution, where we don't even need to convert the numbers to strings to get the desired output:
def format_till_2_decimal(num):
    return int(num*100)/100.0  # int() drops the fractional part after scaling by 100 (truncates toward zero)
g = [-5.427926, -12.222018, 7.214379, -16.771845, -6.1441464, 10.1383295, 14.740516, 5.9209185, -9.740783, -10.098338]
formatted_g = [format_till_2_decimal(num) for num in g]
print(formatted_g)
Hope this solution helps!!
Here might be the answer you are looking for:
g = [-5.427926, -12.222018, 7.214379, -16.771845, -6.1441464, 10.1383295, 14.740516, 5.9209185, -9.740783, -10.098338]
def trunc(number, ndigits=2):
    parts = str(number).split('.')  # split into integer and fractional parts, e.g. '-5' and '427926'
    truncated_number = '.'.join([parts[0], parts[1][:ndigits]])  # keep the integer part plus the first 2 digits of the fraction: '-5.42'
    return round(float(truncated_number), 2)  # return a float, rounded to 2 decimals just to make sure
g1 = [trunc(i) for i in g]
print(g1)
[-5.42, -12.22, 7.21, -16.77, -6.14, 10.13, 14.74, 5.92, -9.74, -10.09]
Hope this helps.
Actually, if David's answer is what you are looking for, it can be done simply as follows:
g = [-5.427926, -12.222018, 7.214379, -16.771845, -6.1441464, 10.1383295, 14.740516, 5.9209185, -9.740783, -10.098338]
g1 = [("%.3f" % i)[:-1] for i in g]
Just take 3 decimals and remove the last char from each result string. (You may convert the results back to float if you like.)
We all know that if we want to find the last digit of 1182973981273983, which is 3, we simply do this:
>>> print(1182973981273983 % 10)
3
But if I want to get the last digit of 2387123.23, I was thinking of doing this:
>>> 2387123.23 % 10 ** (-1 * 2)
0.009999999931681564
But it doesn't work. What is the mathematical way of getting the last digit of a decimal number?
p.s. String solutions are invalid. We are programmers we need to know how math works.
As people have already pointed out in the comments, this is not possible for floating point numbers; the concept of 'last decimal' simply doesn't apply. See links in comments for more details.
On the other hand, this would work if you were using fixed point arithmetic (i.e. the Decimal type). Then a solution might look like this:
>>> import decimal
>>> d = decimal.Decimal('3.14')
>>> d.as_tuple().digits[-1]
4
I would like to generate uniformly distributed random numbers between 0 and 0.5, but truncated to 2 decimal places.
Without the truncation, I know this is done by:
import numpy as np
rs = np.random.RandomState(123456)
set = rs.uniform(size=(50,1))*0.5
Could anyone help me with suggestions on how to generate random numbers to 2 d.p. only? Thanks!
A float cannot be truncated (or rounded) to 2 decimal digits, because there are many values with 2 decimal digits that just cannot be represented exactly as an IEEE double.
If you really want what you say you want, you need to use a type with exact precision, like Decimal.
Of course there are downsides to doing that—the most obvious one for numpy users being that you will have to use dtype=object, with all of the compactness and performance implications.
But it's the only way to actually do what you asked for.
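A minimal sketch of what that would look like (the variable names and the 0.5 scaling come from the question; ROUND_DOWN for truncation is my assumption):
import numpy as np
from decimal import Decimal, ROUND_DOWN

rs = np.random.RandomState(123456)
floats = rs.uniform(size=(50, 1)) * 0.5
# exact two-decimal values, at the cost of a dtype=object array
exact = np.array([Decimal(x).quantize(Decimal('0.01'), rounding=ROUND_DOWN)
                  for x in floats.flat], dtype=object).reshape(floats.shape)
print(exact[:3, 0])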
Most likely, what you actually want to do is either Joran Beasley's answer (leave them untruncated, and just round at print-out time) or something similar to Lauritz V. Thaulow's answer (get the closest approximation you can, then use explicit epsilon checks everywhere).
Alternatively, you can do implicitly fixed-point arithmetic, as David Heffernan suggests in a comment: Generate random integers between 0 and 50, keep them as integers within numpy, and just format them as fixed point decimals and/or convert to Decimal when necessary (e.g., for printing results). This gives you all of the advantages of Decimal without the costs… although it does open an obvious window to create new bugs by forgetting to shift 2 places somewhere.
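A rough sketch of that last suggestion (note that randint's upper bound is exclusive, hence the 51 to allow 0.50 itself):
import numpy as np
from decimal import Decimal

rs = np.random.RandomState(123456)
# work in hundredths: the integer n stands for the value n/100
hundredths = rs.randint(0, 51, size=(50, 1))
# convert to exact two-decimal Decimals only when needed, e.g. for printing
as_decimals = [Decimal(int(n)) / 100 for n in hundredths.flat]
print(as_decimals[:5])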
decimals are never truncated to 2 decimal places ... however their string representation may be
import numpy as np
rs = np.random.RandomState(123456)
set = rs.uniform(size=(50,1))*0.5
print ["%0.2d"%val for val in set]
How about this?
np.random.randint(0, 50, size=(50,1)).astype("float") / 100
That is, create random integers from 0 up to 49 (np.random.randint excludes the upper bound), and divide by 100.
EDIT:
As made clear in the comments, this will not give you exact two-digit decimals to work with, due to the nature of float representations in memory. It may look like you have the exact float 0.1 in your array, but it definitely isn't exactly 0.1. It is very, very close, though, and you could get closer still with a longer datatype such as np.longdouble (numpy's default float64 is already a C double).
You can postpone this problem by just keeping the numbers as integers, and remember that they're to be divided by 100 when you use them.
hundreds = np.random.randint(0, 50, size=(50, 1))
Then at least the roundoff won't happen until the last minute (or maybe not at all, if the numerator of the equation is a multiple of the denominator).
I managed to find another alternative:
import numpy as np
rs = np.random.RandomState(123456)
set = rs.uniform(size=(50,2))
for i in range(50):
    for j in range(2):
        set[i, j] = round(set[i, j], 2)
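For what it's worth, the explicit loops shouldn't be needed; numpy's np.round does the same thing element-wise:
import numpy as np

rs = np.random.RandomState(123456)
vals = np.round(rs.uniform(size=(50, 2)), 2)  # round every element to 2 decimals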
Possible Duplicate:
python limiting floats to two decimal points
I want to set 39.54484700000000 to 39.54 using Python. How do I get that? Thanks.
If you want to change the actual value, use round as Eli suggested. However, for many values and certain versions of Python the result will not be represented as the string "39.54". If you want to just round it to produce a string to display to the user, you can do
>>> print "%.2f" % (39.54484700000000)
39.54
or in newer versions of Python
>>> print("{:.2f}".format(39.54484700000000))
39.54
or with f-strings
>>> print(f'{39.54484700000000:.2f}')
39.54
Relevant Documentation: String Formatting Operations, Built-in Functions: round
How about round?
>>> import decimal
>>> d=decimal.Decimal("39.54484700000000")
>>> round(d,2)
39.54
You can use the quantize method if you're using a Decimal:
In [24]: q = Decimal('0.00')
In [25]: d = Decimal("115.79341800000000")
In [26]: d.quantize(q)
Out[26]: Decimal("115.79")
>>> round(39.54484700000000, 2)
39.54
Note, however, that the result isn't actually 39.54, but 39.53999999999999914734871708787977695465087890625.
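As a small aside (not from the original answer), you can see that exact stored value for yourself:
from decimal import Decimal

print(Decimal(39.54))  # prints the exact binary value the float 39.54 actually stores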
Use round:
Return the floating point value x rounded to n digits after the decimal point. If n is omitted, it defaults to zero. The result is a floating point number. Values are rounded to the closest multiple of 10 to the power minus n; if two multiples are equally close, rounding is done away from 0.
>>> round(39.544847, 2)
39.539999999999999
>>>
Note that since 39.54 isn't exactly representable with floating point on my PC (x86), the result is an epsilon off. But that makes no difference (and is a whole different issue, with many SO questions and answers on it). If you convert it to a string properly, you'll see what you expect:
>>> "%.2f" % round(39.544847, 2)
'39.54'
Eli mentions using the round function -- depending on your requirements, you may want to return a Decimal object instead.
>>> from decimal import Decimal
>>> float_val = 39.54484700000000
>>> decimal_val = Decimal("%.2f" % float_val)
>>> print decimal_val
39.54
Using Decimal objects lets you specify the exact number of decimal places that you want to keep track of, so you avoid ending up with a floating point number that is represented as 39.539999999999999. Specifically, if you are doing financial calculations, you will almost always be advised to stay away from floating-point numbers.
Casting floats directly into Decimals is tricky, however (in older versions of Python it wasn't allowed at all, and even where it is, the Decimal captures the float's binary imprecision exactly rather than guessing how you want it rounded), so I will almost always convert them to a rounded string representation first (that's the "%.2f" % float_val part -- %.2f means to display only two decimals) and then create a Decimal out of that.
I have two floats in Python that I'd like to subtract, i.e.
v1 = float(value1)
v2 = float(value2)
diff = v1 - v2
I want "diff" to be computed up to two decimal places, that is compute it using %.2f of v1 and %.2f of v2. How can I do this? I know how to print v1 and v2 up to two decimals, but not how to do arithmetic like that.
The particular issue I am trying to avoid is this. Suppose that:
v1 = 0.982769777778
v2 = 0.985980444444
diff = v1 - v2
and then I print to file the following:
myfile.write("%.2f\t%.2f\t%.2f\n" %(v1, v2, diff))
then I will get the output 0.98 0.99 0.00, where the printed diff suggests there is no difference between v1 and v2 even though the printed values differ by 0.01. How can I get around this?
thanks.
You said in a comment that you don't want to use decimal, but it sounds like that's what you really should use here. Note that it isn't an "extra library": it has shipped with Python by default since v2.4; you just need to import decimal. When you want to display the values, you can use Decimal.quantize to round the numbers to 2 decimal places for display purposes, and then take the difference of the resulting decimals.
>>> v1 = 0.982769777778
>>> v2 = 0.985980444444
>>> from decimal import Decimal
>>> d1 = Decimal(str(v1)).quantize(Decimal('0.01'))
>>> d2 = Decimal(str(v2)).quantize(Decimal('0.01'))
>>> diff = d2 - d1
>>> print d1, d2, diff
0.98 0.99 0.01
I find round is a good alternative.
a = 2.000006
b = 7.45001
c = b - a
print(c)  # this gives 5.450004
print(round(c, 2))  # this gives 5.45
I've used poor man's fixed point in the past. Essentially, use ints, multiply all of your numbers by 100 and then divide them by 100 before you print.
There was a good post on similar issues on Slashdot recently.
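A rough sketch of that idea applied to the question's numbers (the helper name and the use of round when converting in are my own choices):
def to_cents(x):
    # represent a value as an integer number of hundredths
    return int(round(x * 100))

v1 = 0.982769777778
v2 = 0.985980444444
c1, c2 = to_cents(v1), to_cents(v2)
diff = c1 - c2
# divide by 100 only when printing
print("%.2f\t%.2f\t%.2f" % (c1 / 100.0, c2 / 100.0, diff / 100.0))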
The thing about the float type is that you can't really control what precision calculations are done with. The float type is letting the hardware do the calculations, and that typically does them the way it's most efficient. Because floats are (like most machine-optimized things) binary, and not decimal, it's not straightforward (or efficient) to force these calculations to be of a particular precision in decimal. For a fairly on-point explanation, see the last chapter of the Python tutorial. The best you can do with floats is round the result of calculations (preferably just when formatting them.)
For controlling the actual calculations and precision (in decimal notation, no less), you should consider using Decimals from the decimal module instead. It gives you much more control over pretty much everything, although it does so at a speed cost.
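For example, a minimal sketch (the precision of 4 significant digits is an arbitrary choice to show the effect):
from decimal import Decimal, getcontext

getcontext().prec = 4  # arithmetic results keep 4 significant digits
a = Decimal('0.982769777778')
b = Decimal('0.985980444444')
print(b - a)  # 0.003211 - the subtraction itself is rounded to the context precision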
What do you mean "two significant figures"? Python float has about 15 sigfigs, but it does have representation errors and roundoff errors (as does decimal.Decimal.) http://docs.python.org/tutorial/floatingpoint.html might prove an interesting read for you, along with the great amount of resources out there about floating point numbers.
float usually has the right kind of precision for representing real numbers, such as physical measurements: weights, distances, durations, temperatures, etc. If you want to print out a certain way of displaying floats, use the string formatting as you suggest.
If you want to represent fixed-precision things exactly, you probably want to use ints. You'll have to keep track of the decimal place yourself, but this is often not too tough.
decimal.Decimal is recommended way too much. It can be tuned to specific, higher-than-float precision, but this is very seldom needed; I don't think I've ever seen a problem where this was the reason someone should use decimal.Decimal. decimal.Decimal lets you represent decimal numbers exactly and control how rounding works, which makes it suitable for representing money in some contexts.
You could format it using a different format string. Try '%.2g' or '%.2e'. Any decent C/C++ reference will describe the different specifiers. %.2e formats the value to three significant digits in exponential notation - the 2 means two digits following the decimal point and one digit preceding it. %.2g will result in either %.2f or %.2e depending on which will yield two significant digits in the minimal amount of space.
>>> v1 = 0.982769777778
>>> v2 = 0.985980444444
>>> print '%.2f' %v1
0.98
>>> print '%.2g' %v1
0.98
>>> print '%.2e' %v1
9.83e-01
>>> print '%.2g' %(v2-v1)
0.0032
>>> print '%.2e' %(v2-v1)
3.21e-03