Python is exhibiting a strange behaviour that I haven't witnessed before. I'm not sure what I did or what happened, but basically it doesn't operate on decimals in the shell.
If I type something simple:
>>> 2/3
0
>>> 3/2
1
If I try to format that through % or format(), it doesn't do much either; basically it just doesn't recognize any decimals:
>>> a = 2/3
>>> a
0
>>> format(a, '.5f')
'0.00000'
I needed a simple division in my code to check something, and all of a sudden I encountered something as bizarre as this.
I use Python 2.7
In Python 2, / performs "integer division" by default. If you put
from __future__ import division
at the top of your script, it will do the division you want, which is the default behavior in Python 3. Alternatively, if you want to stay compatible with old Python versions (not recommended for new code), do
2. / 3.
or, with variables
x / float(y)
a = 2/3.
or
a = 2./3
At least one number needs to be a float!
You are performing operations exclusively on integers, which means fractional components of numbers are dropped. You need something like 2.0/3 instead, so floating point arithmetic will be used.
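A minimal sketch of the __future__ fix in a Python 2.7 shell (the exact digits shown assume Python 2.7's shortest-repr display):
>>> 2/3
0
>>> from __future__ import division
>>> 2/3
0.6666666666666666
>>> 2//3    # floor division is still available when you want it
0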
Related
When I type regular division, it defaults to floor. Why is it doing this? How do I change it? Will I have to change it every time?
ex:
>>>97/20
4
>>>97//20
4
That's because both numbers are integers, and in Python 2 it works that way: dividing two integers uses floor division. You could see a difference if you did this:
>>>97.0/20.0
4.85
>>>97.0//20.0
4.0
It seems like you are using Python 2.x; this is not a problem in Python 3.x.
If you want a precise result, use 97/20.0.
Making the divisor 20.0 means one operand is a float, so the result is a float.
You're using Python 2. The behaviour when dividing integers changed to float division by default in Python 3. So if you want float division by default for integers, use Python 3 or place
from __future__ import division
at the top of your code to use that feature.
In fact, there are many reasons why it's probably better for you to start using Python 3 right away.
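For comparison, here is what the expressions from the question produce in a Python 3 shell:
>>> 97/20     # true division is the default
4.85
>>> 97//20    # floor division when you really want it
4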
I have an app with some complex mathematical operations that produce very small results, like 0.000028. If I perform a calculation in JavaScript such as (29631/1073741824), it gives me 0.000027596019208431244, whereas the same calculation in Python with the float data type gives me 0.0. How can I increase the number of decimal points in results in Python?
The problem is not in a lack of decimal points, it's that in Python 2 integer division produces an integer result. This means that 29631/1073741824 yields 0 rather than the float you are expecting. To work around this, you can use a float for either operand:
>>> 29631.0/1073741824
2.7596019208431244e-05
This changed in Python 3, where the division operator does the expected thing. You can use a from __future__ import to turn on the new behavior in Python 2:
>>> from __future__ import division
>>> 29631/1073741824
2.7596019208431244e-05
Alternatively, you can force float division explicitly; this works in both Python 2.x and 3.x:
print(str(float(29631)/1073741824))
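If you then want the result displayed with a fixed number of decimal places rather than in scientific notation, the usual format strings work; a small sketch (the digits shown assume IEEE 754 doubles):
>>> x = 29631/1073741824.0
>>> x
2.7596019208431244e-05
>>> '%.12f' % x
'0.000027596019'
>>> '{0:.12f}'.format(x)
'0.000027596019'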
Using Python 2.7 here.
Can someone explain why this happens in the shell?
>>> 5.2-5.0
0.20000000000000018
Searching turned up explanations about numbers of very different scales (a very small number and a very large number) not producing the right results, but that seemed fairly general, and since the numbers I'm using are of the same scale, I don't think that's why this happens.
EDIT: I suppose I didn't make clear that the "this thing happening" I meant was that it returns 0.2 ... 018 instead of simply 0.2. I get that print rounds, and I removed the print part from the code snippet, as it was misleading.
You need to understand that 5.2-5.0 really is 0.20000000000000018, not 0.2. The standard explanation for this is found in What Every Computer Scientist Should Know About Floating-Point Arithmetic.
If you don't want to read all of that, just accept that 5.2, 5.0, and 0.20000000000000018 are all just approximations, as close as the computer can get to the numbers you really want.
Python has some tricks to let you get away with not knowing what every computer scientist should know. The main trick is that str(f), the human-readable rendition of a floating-point number, is rounded to 12 significant digits, so str(5.2-5.0) is "0.2", not "0.20000000000000018". But sometimes you need all the precision you can get, so repr(f), the machine-readable rendition, is not rounded, and repr(5.2-5.0) is "0.20000000000000018".
Now the only thing left to understand is what the interpreter shell does. As Ashwini Chaudhary explains, just evaluating something in the shell prints out its repr, while the print statement prints out its str.
The shell uses repr():
In [1]: print repr(5.2-5.0)
0.20000000000000018
In [2]: print str(5.2-5.0)
0.2
In [3]: print 5.2-5.0
0.2
The default implementation of float.__str__ limits the output to 12 significant digits.
Thus, the least significant digits are dropped and what is left is the value 0.2.
To print more digits (if available), use string formatting:
print '%f' % result # prints 0.200000
That defaults to 6 digits, but you can specify more precision:
print '%.16f' % result # prints 0.2000000000000002
Alternatively, Python offers a newer string formatting method too:
print '{0:.16f}'.format(result) # prints 0.2000000000000002
Why Python produces the 'imprecise' result in the first place has everything to do with the imprecise nature of floating-point arithmetic. Use the decimal module instead if you need more predictable precision:
>>> from decimal import *
>>> getcontext().prec = 1
>>> Decimal(5.2) - Decimal(5.0)
Decimal('0.2')
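One caveat: Decimal(5.2) is constructed from the float 5.2, so it inherits the binary approximation, and lowering the context precision is what makes the example above come out clean. Building Decimals from strings keeps the arithmetic exact without touching the precision:
>>> from decimal import Decimal
>>> Decimal('5.2') - Decimal('5.0')   # exact decimal arithmetic
Decimal('0.2')
>>> Decimal(5.2) == Decimal('5.2')    # the float argument smuggles in the binary error
False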
Python has two different ways of converting an object to a string, the __str__ and __repr__ methods. __str__ is meant to be a normal string output and is used by print; __repr__ is meant to be a more exact representation and is what is displayed when you don't use print, or when you print the contents of a list or dictionary. __str__ rounds floating-point values.
As for why the actual result of the subtraction is 0.20000000000000018 rather than 0.2 exactly, it has to do with the internal representation of floating point. It's impossible to represent 5.2 exactly because it is an infinitely repeating binary fraction. The closest you can come is approximately 5.2000000000000002.
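You can see the stored value directly by asking for more digits than str or repr normally show, for example with the %g format:
>>> '%.17g' % 5.2
'5.2000000000000002'
>>> '%.17g' % (5.2 - 5.0)
'0.20000000000000018'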
I'm a bit new to Python and can't seem to figure out what I'm doing wrong.
a = 9
b = 13
print ((a-b)/a)
-1
But on my calculator, the correct answer is -0.444444444 (meaning 'a' is about 45% lower than 'b').
How can I get a few decimals to show up?
I tried
print Decimal((a-b)/a)
print float((a-b)/a)
both with the same result. It works if I make a = 9.0, but I wanted to see if there was anything I could do without changing the variables.
I'm sure this is super easy but I'm not sure what to try. Any suggestions?
Thanks
Try converting one (or both) of the arguments to a float, rather than the result:
print ((a-b)/float(a))
Or just upgrade to Python 3, where this behaviour has been fixed:
>>> a = 9
>>> b = 13
>>> print ((a-b)/a)
-0.4444444444444444
By the way, if you want integer division in Python 3.x you can use // instead of /. See the PEP about this change if you are interested.
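Applied to the numbers in the question, the two Python 3 operators look like this:
>>> (9 - 13) / 9     # true division
-0.4444444444444444
>>> (9 - 13) // 9    # floor division reproduces the old Python 2 result
-1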
You need to specify that you want to operate on floating point numbers.
For example:
3.0/4.0 or just 3.0/4 will give you a floating-point result. Right now your code is performing integer operations only.
EDIT: you could use float(3)/4 too
You are performing division on two integer operands and in Python 2.x this means integer division. That is, the result is an integer.
You need what is known as floating point division. To force this you just need at least one of the operands to the division to be a float. For example:
print ((a-b)/float(a))
or
print (float(a-b)/a)
I have come across a very strange issue in Python. (Using Python 2.4.x.)
In windows:
>>> a = 2292.5
>>> print '%.0f' % a
2293
But in Solaris:
>>> a = 2292.5
>>> print '%.0f' % a
2292
But this is the same in both windows and solaris:
>>> a = 1.5
>>> print '%.0f' % a
2
Can someone explain this behavior? I'm guessing it's platform-dependent, down to the way Python was compiled?
The function ultimately in charge of performing that formatting is PyOS_snprintf (see the sources). As you surmise, that's unfortunately system-dependent: it relies on vsprintf, vsnprintf, or other similar functions that are ultimately supplied by the platform's C runtime library. I don't recall whether the C standard says anything about '%f' formatting for floats that are exactly midway between two possible rounded values, but whether the standard is lax about this, or strict yet broken by some C runtimes, is ultimately a pretty academic issue.
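If you need rounding behaviour that does not depend on the platform's C library, the decimal module lets you pick the rounding mode explicitly; a minimal sketch:
>>> from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP
>>> Decimal('2292.5').quantize(Decimal('1'), rounding=ROUND_HALF_EVEN)
Decimal('2292')
>>> Decimal('2292.5').quantize(Decimal('1'), rounding=ROUND_HALF_UP)
Decimal('2293')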
round() rounds halves away from zero in Python 2, and to the nearest even integer ("banker's rounding") in Python 3.
"%n.nf" formatting typically rounds halves to even as well, but it goes through the platform's C library, which is why it can differ across systems.
int() truncates toward zero.
"rounding a positive number to the nearest integer
can be implemented by adding 0.5 and truncating"
-- http://en.wikipedia.org/wiki/Rounding
In Python you can do this with math.trunc(n + 0.5),
assuming n is positive, of course...
Where "round half to even" is not appropriate, i now use
math.trunc( n + 0.5 ) where i used to use int(round(n))
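If n can be negative, a hedged variant rounds halves away from zero in both directions (round_half_away is just an illustrative name; math.copysign requires Python 2.6+):
import math

def round_half_away(n):
    # Round halves away from zero; copysign makes the 0.5 shift
    # point in the same direction as n, so negatives work too.
    return math.trunc(n + math.copysign(0.5, n))

print round_half_away(2.5)    # 3
print round_half_away(-2.5)   # -3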
It is platform-dependent. You can find the documentation here.
It is good to use math.ceil or math.floor when you know which way you want to round (up or down).
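For example, in Python 2 both return floats (Python 3 returns ints):
>>> import math
>>> math.floor(2292.5)   # always rounds down
2292.0
>>> math.ceil(2292.5)    # always rounds up
2293.0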