Division by zero due to insufficient precision using exp() in Python

I'm using a bisection algorithm to find the maximum of a function.
Here is my code:
from math import exp
h = 6.62606876e-34
c = 2.99792458e8
k = 1.3806504e-23
T = 7000
epsilon = 1.e-10
def fct(lam):
    return (2*h**2*c**3*exp(h*c/(lam*k*T))/(k*T*lam**7*(exp(h*c/(lam*k*T)) - 1))
            - 10*c**2*h/(lam**6*(exp(h*c/(lam*k*T)) - 1)))
lam1 = 100.e-9
lam2 = 1000.e-9
delta = lam2 - lam1
f1 = fct(lam1)
f2 = fct(lam2)
while delta > epsilon:
    lamm = 0.5*(lam1 + lam2)
    fm = fct(lamm)
    f2 = fct(lam2)
    if fm*f2 > 0:
        lam2 = lamm
    else:
        lam1 = lamm
    delta = lam2 - lam1
print('root of the function: {}+/-{}'.format(lamm, delta))
The problem is that because of the finite precision of floats, I get a division by zero error when the line fm = fct(lamm) is evaluated.
How can I fix this issue?

I think your best bet for doing math in Python is using one of the many math-processing libraries. https://stackoverflow.com/a/13442742/2534876 suggests mpmath for your floating point issues. bigfloat could possibly be sufficient. numpy also has wider types such as float128 (extended precision on most platforms) for this purpose.
Dig around and find something you like.
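For instance, here is a sketch of the question's fct converted to mpmath; the 50-digit setting is an assumption, chosen just to comfortably exceed double precision:

from mpmath import mp, mpf, exp

mp.dps = 50  # assumed working precision: 50 significant digits

h = mpf('6.62606876e-34')
c = mpf('2.99792458e8')
k = mpf('1.3806504e-23')
T = mpf(7000)

def fct(lam):
    lam = mpf(lam)
    u = h*c/(lam*k*T)  # compute the exponent once
    return (2*h**2*c**3*exp(u)/(k*T*lam**7*(exp(u) - 1))
            - 10*c**2*h/(lam**6*(exp(u) - 1)))

The bisection loop itself can stay as-is, since mpf values support the same comparisons and arithmetic as float.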

I don't have a great answer. One technique would be to rescale your units: say, set c = 1 if you're dealing with large quantities, do your computation, then convert your answer back to the unscaled system. You'd do something analogous if you're dealing with very small quantities.
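One concrete way to apply this to the question's function (a sketch; the regrouping is exact algebra, with u = h*c/(lam*k*T) as the natural dimensionless variable, and it removes the repeated tiny intermediate h**2*c**3 ≈ 1e-41):

from math import exp

h = 6.62606876e-34
c = 2.99792458e8
k = 1.3806504e-23
T = 7000

a = h*c/(k*T)  # characteristic length, about 2.06e-6 m

def fct(lam):
    u = a/lam  # dimensionless, roughly 2..21 on [100 nm, 1000 nm]
    # algebraically identical to the original expression
    return (2*h*c**2/lam**6) * (u*exp(u) - 5)/(exp(u) - 1)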
Another solution, which I've just discovered, is the mpmath module, which can carry out computations to 1000 digits of precision and beyond. Their website is http://mpmath.org; I hope this works out for you.

One possibility is to transform your code to use exact arithmetic for all but the transcendental functions (exp in your example), for example using https://docs.python.org/2/library/fractions.html. Naturally, inputs and outputs of the transcendental functions need to be converted.
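A minimal sketch of that idea, assuming Fraction's string constructor for the exact constants; the call to exp remains the single rounding step:

from fractions import Fraction
from math import exp

h = Fraction('6.62606876e-34')
c = Fraction('2.99792458e8')
k = Fraction('1.3806504e-23')
T = Fraction(7000)

def fct(lam):
    lam = Fraction(lam)
    u = h*c/(lam*k*T)     # exact rational arithmetic
    e = Fraction(exp(u))  # the only inexact step: exp works in float
    return (2*h**2*c**3*e/(k*T*lam**7*(e - 1))
            - 10*c**2*h/(lam**6*(e - 1)))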

Related

Math.tan for small angles. Does python use small angle approximation?

I am trying to compute math.tan(0.00000001) and I am getting 0.00000001 back:
>>> math.tan(0.00000001) == 0.00000001
True
Is this due to how math.tan is implemented? Does it use a small-angle approximation?
Where can I find more documentation about this?
One way to go would be, by analogy with numpy.expm1, to implement a function that computes tan(x)-x in double precision.
While a production-quality version of that might be tricky, here is a simple version that should give accurate answers for |x| < 1e-6:
tan(x)-x = sin(x)/cos(x) - x = (sin(x)-x*cos(x))/cos(x)
for such small x we can write, to better than double precision
sin(x) = x - x*x*x/6 + x*x*x*x*x/120
cos(x) = 1 - x*x/2 + x*x*x*x/24
Substituting these we get
tan(x)-x = x*x*x*(1.0/3 - (1.0/30)*x*x)/cos(x)
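Putting that together (a sketch; the accuracy bound |x| < 1e-6 is taken from the reasoning above):

from math import cos

def tan_minus_x(x):
    # tan(x) - x in double precision, accurate for |x| < 1e-6
    return x*x*x*(1.0/3 - (1.0/30)*x*x)/cos(x)

print(tan_minus_x(1e-8))  # ~3.3e-25: far too small to show up in tan(1e-8) itself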
There's nothing special about this. Python's float only has limited precision, which we can explore with numpy:
0.000000010000000000000000209226 # np.tan(0.00000001)
0.000000009999999999999998554864 # np.nextafter(np.tan(0.00000001), -1)
0.000000010000000000000001863587 # np.nextafter(np.tan(0.00000001), 1)
0.000000010000000000000000333... # True value
From this we can see that 0.000000010000000000000000209226 is the closest representation to the true value, but also that it's safe to round-trip this to 0.00000001, thus Python chooses to print it that way.

Increase float precision

I am developing a machine-learning-based algorithm in Python. The main thing that I need to calculate to solve this problem is probabilities. I have the following code:
class_ans = class_probability[current_class] * lambdas[current_class]
for word in appears_words:
    if word in message:
        class_ans *= words_probability[(word, current_class)]
    else:
        class_ans *= (1 - words_probability[(word, current_class)])
ans.append(class_ans)
ans[current_class] /= summ
It works, but when the dataset is too big or the lambdas are too small, I run out of float precision.
I've tried to devise another way of calculating the answer, multiplying and dividing various variables by constants to keep them from under- or overflowing, but nothing helped.
So I would like to ask: is there any way to increase float precision in Python?
Thanks!
You cannot increase the precision of the built-in float. For serious scientific computation where precision is key (and speed is not), consider the following two options:
Instead of using float, switch your datatype to decimal.Decimal and set your desired precision.
For a more battle-hardened, thorough implementation, switch to gmpy2.mpfr as your data type.
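For example, with decimal (a sketch; the 50-digit setting and the toy numbers are purely illustrative):

from decimal import Decimal, getcontext

getcontext().prec = 50        # 50 significant digits
p = Decimal(1)
for _ in range(300):
    p *= Decimal('1e-10')     # as a float, this product would underflow to 0.0
print(p)                      # 1E-3000: far past float's ~1e-308 underflow limit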
However, if your entire computation (or at least the problematic part) involves the multiplication of factors, you can often bypass the need for the above by working in log-space as Konrad Rudolph suggests in the comments:
a * b * c * d * ... = exp(log(a) + log(b) + log(c) + log(d) + ...)
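A minimal sketch of the log-space approach, with illustrative numbers:

from math import exp, log

probs = [1e-150, 1e-150, 1e-150]      # naive product is 1e-450: underflows to 0.0
log_ans = sum(log(p) for p in probs)  # about -1036: perfectly representable
# compare classes by their log-probabilities directly;
# only call exp(log_ans) if the result is guaranteed to be representable
print(log_ans)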

Python: Calculate sine/cosine with a precision of up to 1 million digits

The question is pretty self-explanatory. I've seen a couple of examples for pi but not for trig functions. Maybe one could use a Taylor series as done here, but I'm not entirely sure how to implement that in Python, especially how to store so many digits.
I should mention: this ideally would run on vanilla Python, i.e. no numpy etc.
Thanks!
Edit: as said, I know the question has been asked before, but that was in Java and I was looking for a Python implementation :)
Edit 2: wow, I wasn't aware people here can be so self-absorbed. I did try several approaches but none would work. I thought this was a place where you can ask for advice; guess I was wrong
Last edit: For anyone who might find this useful: many angles can be calculated as a multiple of sqrt(2), sqrt(3) and Phi (1.61803..). Since those numbers are widely available with a precision of up to 10 million digits, it's useful to keep them in a file and read them into your program directly.
mpmath is the way:
from mpmath import mp
precision = 1000000
mp.dps = precision
mp.cos(mp.mpf('0.1'))  # pass '0.1' as a string: a float literal caps accuracy at ~16 digits
If you are unable to install mpmath or any other module, you could try a polynomial (Taylor) approximation:
f(x) = f(x0) + f'(x0)*(x-x0) + f''(x0)*(x-x0)^2/2! + ... + f^(n)(x0)*(x-x0)^n/n! + Rn
where Rn is the Lagrange remainder.
Note that Rn grows fast as soon as x moves away from the center x0, so be careful using a Maclaurin series (a Taylor series centered at 0) when trying to calculate sin(x) or cos(x) of arbitrary x.
Try this:
import math
from decimal import Decimal, getcontext

def sin_taylor(x, decimals):
    getcontext().prec = decimals
    x = Decimal(str(x))  # convert once, so the powers are computed in Decimal, not float
    p = Decimal(0)
    for n in range(decimals):
        p += (-1)**n * x**(2*n+1) / Decimal(math.factorial(2*n+1))
    return p

def cos_taylor(x, decimals):
    getcontext().prec = decimals
    x = Decimal(str(x))
    p = Decimal(0)
    for n in range(decimals):
        p += (-1)**n * x**(2*n) / Decimal(math.factorial(2*n))
    return p

if __name__ == "__main__":
    ang = 0.1
    decimals = 1000000  # beware: one series term per digit makes this extremely slow
    print('sin:', sin_taylor(ang, decimals))
    print('cos:', cos_taylor(ang, decimals))
import math

def sin(x):
    total = 0
    for a in range(0, 50):  # increase 50 for more accurate results
        total += math.pow(-1, a) / math.factorial(2*a+1) * math.pow(x, 2*a+1)
    return total

x = .5
ans = sin(x)
print('{0:.15f}'.format(ans))  # change the 15 for more decimal places
Here is an example of implementing the Taylor series in Python, as you suggested above. Changing it to cos wouldn't be too hard after that.
EDIT:
Added formatting of the last line in order to actually print out more decimal places.

How to do higher precision matrix exponential in python?

Is it possible to do higher-precision matrix exponentials in Python? I mean, obtain higher precision than double-precision floating-point numbers.
I have following testing code:
import sympy
from sympy import N
import random
n = 100
#A = sympy.Matrix([[random.random(),random.random()],
# [random.random(),random.random()]])
A = sympy.Matrix([[1,2],[3,4]])
dlt = 1000
e1 = A.exp()
e1 = N(e1, n)
ee2 = (A/dlt).exp()
ee2 = N(ee2, n)
e2 = sympy.eye(2)
for i in range(dlt):
    e2 = e2*ee2
print(N(max(e1-e2)))
Theoretically, the final result should be zero. With scipy, the error is about 1e-14.
With sympy, if the matrix is [[1,2],[3,4]], the output of the previous code is about 1e-98. However, for a random matrix, the error is around 1e-14.
Is it possible to get results like 1e-100 for random matrices?
Speed is not a concern.
Once you use N, you are in the realm of floating-point operations, and as such you can never assume that you will reach exactly zero. This is the case with all floating-point arithmetic, as discussed here and in many other places. The only reliable solution is to choose a suitable eps and a function that checks against it.
So instead of checking result == 0, define isZero = lambda val: abs(val) < eps and check isZero(result).
This is a universal problem in floating-point operations. In principle you can find true zeros using sympy, because it is an algebra library, not a floating-point math library. However, in the example you gave, not using N (which is what switches to float arithmetic) makes the computation extremely slow.
I made a mistake when first trying mpmath. I have tried it again and it's a perfect solution for this problem.
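For reference, a sketch of the same test using mpmath's expm; the 120-digit setting is an assumption, and the matrices support scalar division and integer powers:

from mpmath import mp, matrix, expm

mp.dps = 120  # assumed working precision, well beyond double
A = matrix([[1, 2], [3, 4]])
dlt = 1000
e1 = expm(A)
e2 = expm(A / dlt) ** dlt
err = max(abs(e1[i, j] - e2[i, j]) for i in range(2) for j in range(2))
print(err)  # should land far below 1e-14; the exact size depends on mp.dps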

Rounding errors in quadratic solver

I'm pretty new to Python and am trying to write some code to solve a given quadratic function. I'm having some trouble with rounding errors in floats, I think because I am dividing two numbers that are very large with a very small difference. (Also, I'm assuming all inputs have real solutions for now.) I've included two different versions of the quadratic formula to show my problem. It works fine for most inputs, but for a = .001, b = 1000, c = .001 the two versions give significantly different answers. Here is my code:
from math import sqrt
a = float(input("Enter a: "))
b = float(input("Enter b: "))
c = float(input("Enter c: "))
xp = (-b+sqrt(b**2-4*a*c))/(2*a)
xn = (-b-sqrt(b**2-4*a*c))/(2*a)
print("The solutions are: x = ",xn,", ",xp,sep = '')
xp = (2*c)/(-b-sqrt(b**2-4*a*c))
xn = (2*c)/(-b+sqrt(b**2-4*a*c))
print("The solutions are: x = ",xn,", ",xp,sep = '')
I'm no expert in the math field, but I believe you should use numpy (a Python module for math): due to the internal number representation on computers, your calculations will not match real-number math (floating-point arithmetic).
http://docs.python.org/2/tutorial/floatingpoint.html
Check this, it is almost exactly what you want:
http://www.annigeri.in/2012/02/python-class-for-quadratic-equations.html
To get more precise results with floating point, be careful not to subtract nearly equal quantities. For the quadratic x^2 + a*x + b = 0, you know that the roots x1 and x2 satisfy
b = x1 * x2
Compute the one with larger absolute value, and get the other one from this relation.
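A sketch of that idea, written for the general a*x^2 + b*x + c = 0 form used in the question (assuming real roots and a != 0):

from math import sqrt, copysign

def solve_quadratic(a, b, c):
    d = sqrt(b*b - 4*a*c)
    q = -0.5 * (b + copysign(d, b))  # b and copysign(d, b) share a sign: no cancellation
    return q / a, c / q              # second root via x1 * x2 = c/a

print(solve_quadratic(.001, 1000, .001))  # roughly (-999999.999999, -1.000000000001e-06)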
Solutions:
Numpy, as suggested by user dhunter, is usually the best solution for math in Python. The numpy libraries are capable of doing quick and accurate math in a number of different fields.
Decimal data types were added in Python 2.4. If you do not want to download an external library and do not anticipate doing many long or complex calculations, the decimal datatype may fit the bill.
Simply add:
from decimal import *
to the top of your code and then replace all instances of the word float with the word Decimal (note the uppercase "D").
Ex: Decimal('1.1047262519') as opposed to float(1.1047262519). (Pass a string; constructing a Decimal from a float literal carries the float's rounding error over.)
Theory:
Float arithmetic is based on binary math and is therefore not always exactly what a user would expect. An excellent description of float vs. decimal types is located here.
The previously-mentioned numpy module is not particularly relevant to the rounding error mentioned in the question. On the other hand, the decimal module can be used in a brute-force manner to get accurate computations. The following snippet from an ipython interpreter session illustrates its use (with default 28-digit accuracy), and also shows that the corresponding floating-point calculation only has 5 decimal places of accuracy.
In [180]: from decimal import Decimal
In [181]: a=Decimal('0.001'); b=Decimal('1000'); c=Decimal('0.001')
In [182]: (b*b - 4*a*c).sqrt()
Out[182]: Decimal('999.9999999979999999999980000')
In [183]: b-(b*b - 4*a*c).sqrt()
Out[183]: Decimal('2.0000000000020000E-9')
In [184]: a = .001; b = 1000; c = .001
In [185]: math.sqrt(b*b - 4*a*c)
Out[185]: 999.999999998
In [186]: b-math.sqrt(b*b - 4*a*c)
Out[186]: 1.999978849198669e-09
In [187]: 2*a*c/b
Out[187]: 1.9999999999999997e-09
A Taylor series for the square root offers an alternative method to use when 4*a*c is tiny compared to b**2. In this case, √(b*b-4*a*c) ≈ b - 4*a*c/(2*b), whence b - √(b*b-4*a*c) ≈ 2*a*c/b. As can be seen in the In [187] entries above, the Taylor-series computation gives a result accurate to 12 digits while using floating point instead of Decimal. Using another Taylor-series term might add a couple more digits of accuracy.
There are special cases that you should deal with:
a == 0 means a linear equation and one root: x = -c/b
b == 0 means two roots of the form x1, x2 = ±sqrt(-c/a)
c == 0 means two roots, but one of them is zero: x*(ax+b) = 0
If the discriminant is negative, you have two complex conjugate roots.
I'd recommend calculating the discriminant this way (valid when b != 0):
discriminant = abs(b)*sqrt(1.0 - 4.0*a*c/(b*b))
I'd also recommend reading this:
https://math.stackexchange.com/questions/187242/quadratic-equation-error
