Is it possible to do a higher-precision matrix exponential in Python? I mean obtaining higher precision than double-precision floating point numbers.
I have the following test code:
import sympy
from sympy import N
import random
n = 100
#A = sympy.Matrix([[random.random(),random.random()],
# [random.random(),random.random()]])
A = sympy.Matrix([[1,2],[3,4]])
dlt = 1000
e1 = A.exp()
e1 = N(e1, n)
ee2 = (A/dlt).exp()
ee2 = N(ee2, n)
e2 = sympy.eye(2)
for i in range(dlt):
    e2 = e2*ee2
print(N(max(e1-e2)))
Theoretically, the final result should be zero. With scipy, the error is about 1e-14.
With sympy, if the matrix is [[1,2],[3,4]], the output of the previous code is about 1e-98. However, for a random matrix, the error is around 1e-14.
Is it possible to get results like 1e-100 for random matrices?
Speed is not a concern.
Once you use N, you are in the realm of floating point operations, and as such you can never assume that you will reach exact zero. This is the case with all floating point arithmetic, as discussed here and in many other places. The only reliable solution is to include a suitably chosen eps variable and a function to check against it.
So instead of checking result == 0 define isZero = lambda val: abs(val) < eps and check isZero(result).
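For example (a small illustration; eps here is an arbitrary tolerance you would tune to your working precision):

eps = 1e-12
isZero = lambda val: abs(val) < eps

print(0.1 + 0.2 - 0.3)           # 5.551115123125783e-17, not exactly 0
print(isZero(0.1 + 0.2 - 0.3))   # True: treat tiny residues as zero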
This is a universal problem in floating point operations. In principle, using sympy, you can find exact zeros because it is a computer algebra library, not a floating point math library. However, in the example you gave, not using N (which is what switches to float arithmetic) makes the computation extremely slow.
I made a mistake when I first tried mpmath.
I have since tried mpmath again, and it is a perfect solution for this problem.
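For reference, here is a minimal sketch of the same test using mpmath's matrix exponential (mp.expm); the 120-digit setting is an arbitrary choice:

from mpmath import mp

mp.dps = 120                          # work with ~120 significant digits
A = mp.matrix([[1, 2], [3, 4]])
dlt = 1000

e1 = mp.expm(A)                       # exp(A) directly
ee2 = mp.expm(A / dlt)                # exp(A/dlt)
e2 = mp.eye(2)
for _ in range(dlt):
    e2 = e2 * ee2                     # (exp(A/dlt))**dlt should equal exp(A)

# largest elementwise deviation; far smaller than the double-precision 1e-14
print(max(abs(e1[i, j] - e2[i, j]) for i in range(2) for j in range(2)))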
The fundamental theorem of algebra entails the existence of n complex roots for the equation z^n = a, where a is a real number, n is a positive integer, and z is a complex number. Some of those roots may also be real (i.e. a+bi where b=0).
One example where there are multiple real roots is z^2 = 1, where we obtain z = ±sqrt(1) = ±1. The solution z = 1 is immediate. The solution z = -1 is obtained by z = sqrt(1) = sqrt(-1 * -1) = I * I = -1, where I is the imaginary unit.
In Python/NumPy (as well as many other programming languages and packages) only a single value is returned. Here are two examples for 5^(1/3), which has 3 roots.
>>> 5 ** (1 / 3)
1.7099759466766968
>>> import numpy as np
>>> np.power(5, 1/3)
1.7099759466766968
It is not a problem for my use case that only one of the possible roots is returned, but it would be informative to know which root is systematically calculated in the contexts of Python and NumPy. Perhaps there is an (ISO) standard stating which root should be returned, or perhaps there is a commonly used algorithm that happens to return a specific root. I've imagined an equivalence class such as "the maximum of the real-valued solutions", but I do not know.
Question: When I take an nth root in Python and NumPy, which of the n existing roots do I actually get?
Since the identity xᵃ = exp(a⋅log(x)) is typically used to define the general power, you'll get the root corresponding to the chosen branch cut of the complex logarithm.
With regards to this, the numpy documentation says:
For real-valued input data types, log always returns real output. For each value that cannot be expressed as a real number or infinity, it yields nan and sets the invalid floating point error flag.
For complex-valued input, log is a complex analytical function that has a branch cut [-inf, 0] and is continuous from above on it. log handles the floating-point negative zero as an infinitesimal negative number, conforming to the C99 standard.
So for example, np.power(-1 +0j, 1/3) = 0.5 + 0.866j = np.exp(np.log(-1+0j)/3).
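A quick check of that identity (a small illustration; np.log returns the principal branch for complex input):

import numpy as np

z = np.power(-1 + 0j, 1/3)            # principal cube root of -1
w = np.exp(np.log(-1 + 0j) / 3)       # same value via exp(log(x)/3)
print(z, w)                           # both ≈ 0.5 + 0.8660254j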
The question is pretty self-explanatory. I've seen a couple of examples for pi but not for trig functions. Maybe one could use a Taylor series as done here, but I'm not entirely sure how to implement that in Python, especially how to store so many digits.
I should mention: this ideally would run on vanilla python i.e. no numpy etc.
Thanks!
Edit: as said, I know the question has been asked before but it's in java and I was looking for a python implementation :)
Edit 2: wow, I wasn't aware people here could be so self-absorbed. I did try several approaches, but none would work. I thought this was a place you could ask for advice; I guess I was wrong.
last edit: For anyone who might find this useful: many angles can be calculated as a multiple of sqrt(2), sqrt(3), and Phi (1.61803...). Since those numbers are widely available with a precision of up to 10 million digits, it's useful to keep them in a file and read them into your program directly.
mpmath is the way:
from mpmath import mp

precision = 1000000                # decimal digits; this many works, but is slow
mp.dps = precision
# pass the argument as a string so it is not truncated to a ~16-digit float first
print(mp.cos(mp.mpf('0.1')))
If you are unable to install mpmath or any other module, you could try polynomial approximation, as suggested:
f(x) = f(x0) + f'(x0)(x − x0) + f''(x0)/2!·(x − x0)² + … + f⁽ⁿ⁾(x0)/n!·(x − x0)ⁿ + Rn(x)
where Rn is the Lagrange remainder, Rn(x) = f⁽ⁿ⁺¹⁾(ξ)/(n+1)!·(x − x0)ⁿ⁺¹ for some ξ between x0 and x.
Note that Rn grows fast as soon as x moves away from the center x0, so be careful when using a Maclaurin series (a Taylor series centered at 0) to calculate sin(x) or cos(x) for arbitrary x.
Try this
from decimal import Decimal, getcontext

def sin_taylor(x, decimals):
    getcontext().prec = decimals
    x = Decimal(str(x))            # keep every operation in Decimal, not float
    eps = Decimal(10) ** (-decimals)
    p, term, n = Decimal(0), x, 0
    while abs(term) > eps:         # stop once terms no longer affect the result
        p += term
        n += 1
        term *= -x * x / ((2 * n) * (2 * n + 1))   # recurrence avoids huge factorials
    return p

def cos_taylor(x, decimals):
    getcontext().prec = decimals
    x = Decimal(str(x))
    eps = Decimal(10) ** (-decimals)
    p, term, n = Decimal(0), Decimal(1), 0
    while abs(term) > eps:
        p += term
        n += 1
        term *= -x * x / ((2 * n - 1) * (2 * n))
    return p

if __name__ == "__main__":
    ang = 0.1
    decimals = 1000               # the original 1000000 also works, just very slowly
    print('sin:', sin_taylor(ang, decimals))
    print('cos:', cos_taylor(ang, decimals))
import math

x = .5

def sin(x):
    total = 0
    for a in range(50):   # increase 50 for more accurate results
        total += (-1)**a / math.factorial(2*a + 1) * x**(2*a + 1)
    return total

ans = sin(x)
print('{0:.15f}'.format(ans))   # change the 15 for more decimal places
Here is an example of implementing the Taylor series in Python, as you suggested above. Changing it to cos wouldn't be too hard after that.
EDIT:
Added the formatting of the last line in order to actually print out more decimal places.
I want to solve an equation using scipy.optimize.
I want to find the solution, n, for the equation
a**n + b**n = c**n
where
a=2.3
b=2.4
c=2.94
I have a list of triplets (a,b,c) I want to experiment with, and I know the range of the exponent n will always be 2.0 < n < 4.0. Could I use this fact to speed up the convergence of the solution?
If your function is scalar, and accepts a scalar (your case), and if you know that:
your solution is in a given interval, and the function is continuous in the same interval (your case)
you are interested in one solution, not necessarily in all (if more than 1) solutions in that interval
You can speed up the solution using the bisection algorithm, implemented here in scipy, which requires the conditions above to guarantee convergence.
The idea behind the algorithm is quite simple: halve the bracketing interval at each step and keep the half where the function changes sign. Convergence is logarithmic: reaching a tolerance tol from an initial interval of width w takes about log2(w/tol) iterations.
See the intermediate value theorem, the fundamental calculus result on which the algorithm is based.
EDIT: I couldn't resist, here you have a MWE
import scipy.optimize as opt

def sol(a, b, c):
    f = lambda n: a**n + b**n - c**n
    return opt.bisect(f, 2, 4)

print(sol(2.3, 2.4, 2.94))
# output: 3.1010655957
As requested in the comments, here's how to do it using mpmath.
We supply the a, b, c parameters as strings rather than as Python floats for maximum accuracy. Converting strings to mpf (mp floats) will be as accurate as the current precision allows. If instead we convert from Python floats then we'd be using numbers that suffer from the imprecision inherent in Python floats.
mp.dps allows us to set the precision in the form of the number of decimal digits.
The mpmath findroot function accepts an initial approximation argument. This can be a single value, or it may be an interval, given as a list or a tuple. It's ok to use Python floats in that interval.
from mpmath import mp

mp.dps = 30
a, b, c = [mp.mpf(u) for u in ('2.3', '2.4', '2.94')]

def f(x):
    return a**x + b**x - c**x

x = mp.findroot(f, [2, 4])
print(x, f(x))
output
3.10106559575904097402104750305 -3.15544362088404722164691426113e-30
By default, findroot uses a simple secant solver. The docs recommend using the 'anderson' or 'ridder' solvers when supplying an interval, but for this equation all 3 solvers give identical results.
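For instance, continuing the snippet above (solver names as listed in the mpmath documentation):

# all three solvers agree on this equation
for solver in ('secant', 'anderson', 'ridder'):
    print(solver, mp.findroot(f, [2, 4], solver=solver))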
I'm using a bisection algorithm to find the maximum of a function.
Here is my code:
from math import exp

h = 6.62606876e-34
c = 2.99792458e8
k = 1.3806504e-23
T = 7000
epsilon = 1.e-10

def fct(lam):
    return (2*h**2*c**3*exp(h*c/(lam*k*T))/(k*T*lam**7*(exp(h*c/(lam*k*T)) - 1))
            - 10*c**2*h/(lam**6*(exp(h*c/(lam*k*T)) - 1)))

lam1 = 100.e-9
lam2 = 1000.e-9
delta = lam2 - lam1
f1 = fct(lam1)
f2 = fct(lam2)

while delta > epsilon:
    lamm = 0.5*(lam1 + lam2)
    fm = fct(lamm)
    f2 = fct(lam2)
    if fm*f2 > 0:
        lam2 = lamm
    else:
        lam1 = lamm
    delta = lam2 - lam1

print('root of the function: {}+/-{}'.format(lamm, delta))
The problem is that because of the finite precision of floats, I get a division by zero error when the line fm = fct(lamm) is evaluated.
How can I fix this issue?
I think your best bet for doing math in Python is using one of the many math processing libraries. This answer suggests mpmath for your floating point issues: https://stackoverflow.com/a/13442742/2534876. bigfloat could possibly be sufficient. numpy also has types like float128 for this purpose.
Dig around and find something you like.
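As a sketch of what that could look like here (assuming mpmath; mp.exp works at any precision, which sidesteps both overflow and the cancellation in exp(...) - 1):

from mpmath import mp

mp.dps = 50                           # 50 significant digits (arbitrary choice)
h = mp.mpf('6.62606876e-34')
c = mp.mpf('2.99792458e8')
k = mp.mpf('1.3806504e-23')
T = mp.mpf(7000)

def fct(lam):
    u = mp.exp(h*c/(lam*k*T))
    return (2*h**2*c**3*u/(k*T*lam**7*(u - 1))
            - 10*c**2*h/(lam**6*(u - 1)))

print(fct(mp.mpf('500e-9')))          # evaluate anywhere in the bracket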
I don't have a great answer. One technique would be to rescale your units: say, set c=1 if you're dealing with large quantities, do your computation, then express your answer back in the unscaled system. You'd have to do something analogous if you're dealing with small units.
Another solution, which I've just discovered, is the mpmath module, which supports arbitrary precision (1000 digits and far beyond). Their website is http://mpmath.org; I hope this works out for you.
One possibility is to transform your code to use exact arithmetic for all but the transcendental functions (exp in your example), for example using https://docs.python.org/2/library/fractions.html. Naturally, inputs and outputs of the transcendental functions need to be converted.
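A sketch of that idea (the constants stay exact rationals; only the exp call rounds):

from fractions import Fraction
from math import exp

h = Fraction('6.62606876e-34')        # Fraction accepts decimal strings exactly
c = Fraction('2.99792458e8')
k = Fraction('1.3806504e-23')
T = Fraction(7000)

def fct(lam):
    lam = Fraction(lam)
    x = h*c/(lam*k*T)                 # exact rational up to this point
    u = Fraction(exp(float(x)))       # transcendental step: convert in and out
    return float(2*h**2*c**3*u/(k*T*lam**7*(u - 1))
                 - 10*c**2*h/(lam**6*(u - 1)))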
I'm pretty new to Python and am trying to write some code to solve a given quadratic function. I'm having some trouble with rounding errors in floats, I think because I am subtracting two numbers that are very large with a very small difference between them. (Also, I'm assuming all inputs have real solutions for now.) I've included two different versions of the quadratic formula to show my problem. It works fine for most inputs, but when I try a = .001, b = 1000, c = .001 I get two answers that have a significant difference. Here is my code:
from math import sqrt
a = float(input("Enter a: "))
b = float(input("Enter b: "))
c = float(input("Enter c: "))
xp = (-b+sqrt(b**2-4*a*c))/(2*a)
xn = (-b-sqrt(b**2-4*a*c))/(2*a)
print("The solutions are: x = ",xn,", ",xp,sep = '')
xp = (2*c)/(-b-sqrt(b**2-4*a*c))
xn = (2*c)/(-b+sqrt(b**2-4*a*c))
print("The solutions are: x = ",xn,", ",xp,sep = '')
I'm no expert in the math field, but I believe you should use numpy (a Python module for math): due to the internal number representation on computers, your calculations will not match exact math (floating point arithmetic).
http://docs.python.org/2/tutorial/floatingpoint.html
Check this; it is almost exactly what you want:
http://www.annigeri.in/2012/02/python-class-for-quadratic-equations.html
To get more precise results with floating point, be careful not to subtract nearly equal quantities. For the quadratic x^2 + a x + b = 0, you know that the roots x1 and x2 satisfy
b = x1 * x2
Compute the one with larger absolute value, and get the other one from this relation.
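In the question's notation (a*x**2 + b*x + c = 0, where the product of the roots is c/a), a minimal sketch of this idea:

from math import sqrt

def stable_roots(a, b, c):
    # assumes real roots (b*b >= 4*a*c) and a != 0
    d = sqrt(b*b - 4*a*c)
    # pick the sign so that -b and ±d add instead of cancelling
    x1 = (-b - d) / (2*a) if b >= 0 else (-b + d) / (2*a)
    x2 = c / (a * x1)                 # recover the small root from the product
    return x1, x2

print(stable_roots(0.001, 1000, 0.001))
# ≈ (-999999.999999, -1.000000000001e-06), both accurate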
Solutions:
NumPy, as suggested by user dhunter, is usually the best solution for math in Python. The numpy libraries are capable of doing quick and accurate math in a number of different fields.
Decimal data types were added in Python 2.4. If you do not want to download an external library and do not anticipate doing many long or complex equations, decimal data types may fit the bill.
Simply add:
from decimal import *
to the top of your code and then replace all instances of the word float with the word Decimal (note the uppercase "D").
Ex: Decimal('1.1047262519') as opposed to float(1.1047262519). (Pass a string to Decimal: Decimal(1.1047262519) would inherit the float's imprecision.)
Theory:
Float arithmetic is based on binary math and is therefore not always exactly what a user would expect. An excellent description of the float vs. decimal types is located here.
The previously-mentioned numpy module is not particularly relevant to the rounding error mentioned in the question. On the other hand, the decimal module can be used in a brute-force manner to get accurate computations. The following snippet from an ipython interpreter session illustrates its use (with default 28-digit accuracy), and also shows that the corresponding floating-point calculation only has 5 decimal places of accuracy.
In [180]: from decimal import Decimal
In [181]: a=Decimal('0.001'); b=Decimal('1000'); c=Decimal('0.001')
In [182]: (b*b - 4*a*c).sqrt()
Out[182]: Decimal('999.9999999979999999999980000')
In [183]: b-(b*b - 4*a*c).sqrt()
Out[183]: Decimal('2.0000000000020000E-9')
In [184]: a = .001; b = 1000; c = .001
In [185]: math.sqrt(b*b - 4*a*c)
Out[185]: 999.999999998
In [186]: b-math.sqrt(b*b - 4*a*c)
Out[186]: 1.999978849198669e-09
In [187]: 2*a*c/b
Out[187]: 1.9999999999999997e-09
A Taylor series for the square root offers an alternative method to use when 4*a*c is tiny compared to b**2. In this case, √(b*b − 4*a*c) ≈ b − 4*a*c/(2*b), whence b − √(b*b − 4*a*c) ≈ 2*a*c/b. As can be seen in the In [187] entries above, this Taylor series computation gives a 12-digit-accurate result while using floating point instead of Decimal. Using another Taylor series term might add a couple more digits of accuracy.
There are special cases that you should deal with:
a == 0 means a linear equation and one root: x = -c/b
b == 0 means two roots of the form x1, x2 = ±sqrt(-c/a)
c == 0 means two roots, but one of them is zero: x*(ax+b) = 0
If the discriminant is negative, you have two complex conjugate roots.
I'd recommend calculating the square root of the discriminant this way (factored so the subtraction happens on a well-scaled quantity):
discriminant = abs(b)*sqrt(1.0 - 4.0*a*c/(b*b))
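Putting the special cases together (a minimal sketch; the function name is just for illustration, and cmath.sqrt covers the negative-discriminant case):

import cmath

def quad_roots(a, b, c):
    if a == 0:
        return (-c / b,)              # linear: one root
    if c == 0:
        return (0.0, -b / a)          # x * (a*x + b) = 0
    d = cmath.sqrt(b*b - 4*a*c)       # complex result when discriminant < 0
    return ((-b + d) / (2*a), (-b - d) / (2*a))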
I'd also recommend reading this:
https://math.stackexchange.com/questions/187242/quadratic-equation-error