This question already has answers here:
Integer division in Python 2 and Python 3
(4 answers)
Closed 3 years ago.
I have defined a function in Python 2.7.15 which takes two inputs (b, v), but I have noticed that f(50, 0.1) produces very different results from f(50.0, 0.1). This is the function:
import math
import numpy as np
from scipy.integrate import odeint

def f(b, v):
    h = b*v/math.sqrt(1 - v**2)
    def dJ_dp(J, p):
        return [J[1], -J[0] + 3.0/2*J[0]**2 + 1/(2*h**2)]
    J0 = [0.0000001, 1/b]
    ps = np.linspace(0, 15, 50)
    Js = odeint(dJ_dp, J0, ps)
    us = Js[:, 0]
    return (ps, 1/us)
I've needed to define dJ_dp inside f(b, v) because it needs the value of h. Why are the outputs so different? Why are they different at all?
My hypothesis was that something went wrong when defining h, but that doesn't seem to be the case.
The problem is probably here: J0 = [0.0000001,1/b].
If b is the int 50, 1/b is evaluated with integer division and yields 0. If b is the float 50.0, it is evaluated with floating point division and yields 0.02.
You could use 1.0 instead of 1 to force floating point arithmetic:
J0 = [0.0000001, 1.0/b]
# Here -----------^
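Alternatively, in Python 2 you can put from __future__ import division at the top of the file so that / always performs true division, as in Python 3. A quick interpreter check of both behaviors:
>>> 1/50          # Python 2 default: integer division
0
>>> 1.0/50        # one float operand forces true division
0.02
>>> from __future__ import division
>>> 1/50          # true division from here on
0.02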
This question already has answers here:
Is floating point math broken?
(31 answers)
Is floating point arbitrary precision available?
(5 answers)
Closed 3 years ago.
I have two numbers a and b:
a = 1562239482.739072
b = 1562239482.739071
If I perform a - b in Python, I get 1.1920928955078125e-06. However, I want 0.000001, which is the exact decimal answer.
Any help would be appreciated. Thank you in advance.
t = float(1562239482.739071)
T = float(1562239482.739072)
D = float(T - t)
print(float(D))
or
t = 1562239482.739071
T = 1562239482.739072
D = T - t
print (D)
I get the same answer, 1.1920928955078125e-06, with both versions above. However, I want the result 0.000001.
Expected Result: 0.000001
Result : 1.1920928955078125e-06
This is a common problem with floating point arithmetic. Use the decimal module, which works in decimal rather than binary.
You can use Decimal like this:
from decimal import Decimal

a = Decimal('1562239482.739072')
b = Decimal('1562239482.739071')
c = a - b
print(c)
That prints 0.000001, the answer you want.
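If you want to see where the stray digits come from, Decimal can also display the exact binary value a float literal is stored as; a quick diagnostic, separate from the fix above:
from decimal import Decimal

# Decimal(float) shows the exact double behind each literal; neither
# matches its decimal form exactly, so the subtraction picks up noise.
print(Decimal(1562239482.739072))
print(Decimal(1562239482.739071))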
This question already has answers here:
How can I force division to be floating point? Division keeps rounding down to 0?
(11 answers)
Closed 5 years ago.
I have the following program
from fractions import Fraction

def F_inf(a, b):
    x1 = a.numerator/a.denominator
    x2 = b.numerator/b.denominator
    if x1 < x2:
        print "a<b"
    elif x1 > x2:
        print "a>b"
    else:
        print "a=b"

a = Fraction(10, 4)
b = Fraction(10, 4)
F_inf(a, b)
When I execute it, x1 receives just the integer part of the fraction; for example, if I compute 2/4, x1 equals 0, not 0.5.
What should I do?
Thanks
It sounds like you're using Python2. The best solution would be to switch to Python 3 (not just because of the division but because "Python 2.x is legacy, Python 3.x is the present and future of the language").
Other than that you have a couple of choices.
from __future__ import division
# include ^ as the first line in your file to use float division by default
or
a = 1
b = 2
c = a / (1.0*b) # multiplying by 1.0 forces the right side of the division to be a float
# c == 0.5 here
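Applied to the question's Fraction code, the __future__ import alone is enough; a sketch keeping the original structure:
from __future__ import division  # must come before any other statement
from fractions import Fraction

def F_inf(a, b):
    x1 = a.numerator / a.denominator  # now true division: 5/2 -> 2.5
    x2 = b.numerator / b.denominator
    if x1 < x2:
        print "a<b"
    elif x1 > x2:
        print "a>b"
    else:
        print "a=b"

a = Fraction(10, 4)  # normalized to 5/2
b = Fraction(10, 4)
F_inf(a, b)          # prints a=b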
This question already has answers here:
Is floating point math broken?
(31 answers)
Limiting floats to two decimal points
(35 answers)
Closed 5 years ago.
I have this function
a = 2
b = 6
c = .4

def f(n):
    if n == 1:
        return a
    if n == 2:
        return b
    else:
        return c*f(n-1) + (1-c)*f(n-2)
It's a simple recursive function with two base cases. f(1) yields 2, f(2) yields 6, which is good, but f(3) yields 3.6000000000000005, when it should be just 3.6. I don't understand where these extra digits are coming from. What might be causing this?
Welcome to the magic of floating point math. You can find a nice answer here: Is floating point math broken?
Spoiler: no, just difficult.
Using floating point numbers in computers makes rounding necessary: there are infinitely many real numbers between 0.0 and 1.0, for example, so the computer would need infinite memory to represent them all. Instead it uses a finite set of floating point numbers that can be represented exactly. If the result of a computation is not one of these representable numbers, the nearest one is used and the result is not exact. That is what happens here.
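You can watch the rounding happen in the intermediate products of f(3), using the question's own values (a quick interpreter check):
>>> c = .4
>>> a = 2
>>> b = 6
>>> c * b                # 0.4 has no exact binary representation
2.4000000000000004
>>> (1 - c) * a
1.2
>>> c * b + (1 - c) * a
3.6000000000000005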
This question already has answers here:
Negative integer division surprising result
(5 answers)
Closed 7 years ago.
How does Python calculate this division?
>>> -3/10
-1
It looks like Python rounds the answer down to the lower value.
>>> -3/4
-1
>>> -3/4.
-0.75
>>> -3/10.
-0.3
>>> -3/10
-1
This is just my guess.
Python 2, like many languages, uses integer division: dividing two integers returns an integer, namely the exact answer rounded down toward negative infinity. That is why -3/10 gives -1 rather than 0.
To get a floating point result, you need to force one or more of the terms to be a float.
float(-3)/10
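The same floor behavior is visible with the // operator, which divides integers the same way in Python 2 and 3 (a quick interpreter check):
>>> -3 // 10      # floor division rounds toward negative infinity
-1
>>> 3 // 10
0
>>> float(-3)/10  # one float operand gives true division
-0.3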
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 7 years ago.
I have the following piece of code which I run with python 2.7.9:
import math
print 0.6/0.05
print math.floor(0.6/0.05)
print int(0.6/0.05)
The output is:
12.0
11.0
11
Why is it turning my 12.0 into an 11.0 when I use floor or int? round() would work but does not suit my use case.
I have this running with 0.55, 0.60, 0.65, 0.70, ... and everything works fine except for 0.6.
Any idea?
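What print is hiding here: 0.6 and 0.05 have no exact binary representation, and the quotient actually comes out just below 12. Python 2's print rounds the display to 12.0, but floor and int see the true value. A quick check (outputs shown as comments, from a CPython 2.7 session):
import math
print repr(0.6 / 0.05)         # 11.999999999999998 -- just under 12
print math.floor(0.6 / 0.05)   # 11.0, the floor of the true value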
If you know the required resolution in advance, you can multiply your input numbers accordingly and then make the required calculations safely.
Let's say that the resolution is 0.01. Then:
# original inputs
n = 0.6
d = 0.05
# intermediate values
n1 = int(n*100)
d1 = int(d*100)
# integer calculation
r = n1 // d1
# or, as a floating point calculation:
r = float(n1) / float(d1)
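With the question's inputs both variants give 12; a quick check (if a scaled product could land just below the intended integer, int(round(n*100)) is a safer conversion than int(n*100)):
print n1, d1                 # 60 5
print n1 // d1               # 12
print float(n1) / float(d1)  # 12.0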