Sum of absolute values of polynomials with python numpy

Here's what I wrote: it's a classical exercise on interpolation, which I already finished and sent. I was wondering if there was another (longer) way...
q is a list of floats (the points of interpolation)
i is the index of the Lagrange polynomial
x is the point where it is evaluated:
def l(q, i, x):
    poly = 1.0
    for j, p in enumerate(q):
        if j == i:
            continue
        poly *= (x - p) / (q[i] - p)
    return poly
Then there is the function on which I'm working:
def Lambda(q, x):
    value = 0.0
    for j in range(len(q)):
        value += abs(l(q, j, x))
    return value
Now I can use some Python routines to find its maximum value in the interval [0, 1], and I did.
In NumPy there is a polynomial module, with which I can easily redefine l:
import numpy.polynomial.polynomial as P

def l_poly(q, i):
    roots = []
    scale = 1.0
    for j, p in enumerate(q):
        if j == i:
            continue
        roots.append(p)      # the roots of l_i are the other interpolation nodes
        scale *= q[i] - p    # denominator: product over j != i of (q[i] - q[j])
    return P.polyfromroots(roots) / scale
I'd like to do the same with Lambda so that I can find its maximum using the built-in derivative function (find its zeros, and so on and so forth). The problem is that it is a sum of abs(polynomials). Is there a way to do this? Or to mix the polynomial derivative with the derivative of abs(...)?

NumPy does not support arbitrary symbolic expressions. It works only with polynomials, representing a polynomial as an array of coefficients. The absolute value of a polynomial is not a polynomial, so it is not a concept that NumPy has. It is a symbolic expression that can be handled by symbolic manipulation libraries like SymPy.
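For illustration, a minimal SymPy sketch of that idea (with made-up nodes, not your q) might look like the following; note that the derivative of the result contains sign(...) factors, which is exactly the difficulty:

import sympy as sp

x = sp.symbols('x', real=True)
q = [0, sp.Rational(1, 2), 1]          # example interpolation nodes (assumed)

def l_sym(q, i):
    expr = sp.Integer(1)
    for j, p in enumerate(q):
        if j == i:
            continue
        expr *= (x - p) / (q[i] - p)
    return expr

Lambda_sym = sum(sp.Abs(l_sym(q, j)) for j in range(len(q)))
print(sp.diff(Lambda_sym, x))          # contains sign(...) terms, not a plain polynomial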
using the built-in derivative function (find its zeros, and so on and so forth).
There are several problems with this:
As said before, the polyder method of NumPy does not apply to this situation, since abs(polynomial) is not a polynomial.
The derivative of the absolute value function is undefined at 0.
The minimum or maximum of an expression involving absolute values may be attained where the derivative does not exist, so even if you could find the derivative, and somehow find its roots, you still would not solve the problem.
Looking for zeros of the derivative is not a good way to minimize or maximize a function, outside of calculus exercises. Libraries like scipy.optimize implement many efficient numerical methods for this kind of problem.
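As a concrete (hedged) sketch of that last point, repeating l and Lambda from the question so the snippet is self-contained and using a made-up set of nodes, one could sample a grid and then refine with a bounded minimizer from scipy.optimize:

import numpy as np
from scipy.optimize import minimize_scalar

def l(q, i, x):
    poly = 1.0
    for j, p in enumerate(q):
        if j == i:
            continue
        poly *= (x - p) / (q[i] - p)
    return poly

def Lambda(q, x):
    return sum(abs(l(q, j, x)) for j in range(len(q)))

q = [0.0, 0.25, 0.5, 0.75, 1.0]    # example nodes (assumed)

# Lambda has several local maxima between the nodes, so sample a grid first
# and only then refine around the best grid point with a bounded minimizer.
grid = np.linspace(0.0, 1.0, 1001)
x0 = grid[np.argmax([Lambda(q, t) for t in grid])]
lo, hi = max(0.0, x0 - 1e-3), min(1.0, x0 + 1e-3)
res = minimize_scalar(lambda t: -Lambda(q, t), bounds=(lo, hi), method='bounded')
print(res.x, -res.fun)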

Related

Numerical Solver in Python is not able to find a solution

I broke my problem down as follows. I am not able to solve the following equation with Python 3.9 in a meaningful way; instead it always stops at the initial_guess for small lambda_ < 1. Is there an alternative algorithm that can handle the error function better? Or can I force fsolve to keep searching until a solution is found?
import numpy as np
from scipy.special import erfcinv, erfc
from scipy.optimize import root, fsolve

def Q(x):
    return 0.5*erfc(x/np.sqrt(2))

def Qinvers(x):
    return np.sqrt(2)*erfcinv(2*x)

def epseqn(epsilon2):
    lambda_ = 0.1
    return Q(lambda_*Qinvers(epsilon2))

eps1 = fsolve(epseqn, 1e-2)
print(eps1)
I tried root and fsolve to get a solution. Especially for the Gaussian error function I do not find a solution that converges.
root and fsolve can be used to find the roots of a function defined by f(x) = 0. Since your outer function, which is basically erfc(x), has no root (it only approaches the x-axis asymptotically from positive values), the solvers are not able to find one. Real function arguments are assumed, as in your code.
Before blindly starting with numerical calculations, I would recommend thinking about any constraints of your function.
You will find out that your function is only defined for values between zero and one. If you assume that there is only a single root in this interval, I would recommend using a bracketing method like brentq, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html#scipy.optimize.brentq and https://en.wikipedia.org/wiki/Brent%27s_method.
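A minimal sketch of that bracketing attempt on your epseqn, with endpoints chosen just inside (0, 1) since the function is only defined there, could look like this; it will in fact report that there is no sign change, which already hints that the root sits on the boundary:

from scipy.optimize import brentq

try:
    eps1 = brentq(epseqn, 1e-12, 1 - 1e-12)   # bracket strictly inside (0, 1)
    print(eps1)
except ValueError as err:
    # epseqn stays positive on the whole open interval, so brentq finds no bracket
    print("no sign change in the bracket:", err)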
However, you could instead think further and/or just plot your function, e.g. using matplotlib
import matplotlib.pyplot as plt
x = np.linspace(0, 1, 1000)
y = epseqn(x)
plt.plot(x, y)
plt.show()
There you will see that the root is at zero, which makes sense when looking at your functions: the inverse Q-function (Qinvers) goes to plus infinity as its argument goes to zero, and the Q-function itself goes to zero at plus infinity (mathematically in the limit sense, but numerically those functions are also defined for such input values). So you can get the root value without any numerical calculation.
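If you want a numerical confirmation anyway, evaluating your epseqn at the boundary reproduces this (SciPy returns erfcinv(0) = inf and erfc(inf) = 0):

print(epseqn(0.0))   # 0.0, i.e. the root is exactly at the left boundary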

`np.linalg.solve` get solution matrix?

np.linalg.solve solves for x in a problem of the form Ax = b.
For my application, this is done to avoid calculating the inverse explicitly (i.e. inverse(A)b = x).
I'd like to access the effective inverse that was used to solve this problem, but looking at the documentation it doesn't appear to be an option... Is there a reasonable alternative approach I can follow to recover the inverse of A?
(np.linalg.inv(A) is not accurate enough for my use case)
Following the docs and source code, it seems NumPy is calling LAPACK's _gesv to compute the solution, the documentation of which reads:
The routine solves for X the system of linear equations A*X = B, where
A is an n-by-n matrix, the columns of matrix B are individual
right-hand sides, and the columns of X are the corresponding
solutions.
The LU decomposition with partial pivoting and row interchanges is
used to factor A as A = P * L * U, where P is a permutation matrix, L is
unit lower triangular, and U is upper triangular. The factored form of
A is then used to solve the system of equations A * X = B.
The NumPy implementation of solve doesn't return the LU factorization back to the caller; it just frees that memory, so there's no hope there. SciPy provides low-level access to LAPACK, so you should be able to access the result from there. You can follow the actual implementation in LAPACK's Fortran source code: dgesv.f, dgetrf.f and dgetrs.f. Alternatively, you could note that NumPy's inv still calls the same underlying code, so it might be enough for your use case... You didn't specify why it is that you need the approximate inverse matrix.
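If SciPy is acceptable, a minimal sketch using its LU wrappers (scipy.linalg.lu_factor / lu_solve rather than the raw LAPACK bindings, with a placeholder 4x4 system) shows how to reuse the same factorization both to solve the system and to build an explicit inverse:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.random.rand(4, 4)                # placeholder system (assumed)
b = np.random.rand(4)

lu, piv = lu_factor(A)                  # A = P*L*U, computed once

x = lu_solve((lu, piv), b)              # same result as np.linalg.solve(A, b)
A_inv = lu_solve((lu, piv), np.eye(4))  # "effective inverse" from the same factorization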

Discrete Fourier transform for odd function

I have an initial function u(x,0) = -sin(x) and I want to derive the FFT coefficients for an odd-parity solution of the form u(x,t) = $\sum_{k \geq 1} a_k \sin(kx)$. I tried using the normal expansion of the function in terms of $e^{ikx}$, but it adds some error to the solution.
Can anyone suggest a procedure for filtering the Fourier coefficients so that the solution remains odd throughout, using numpy.fft.fft?
If the function is inherently odd (like the sine functions), then only the imaginary part of the FFT output will be non-zero. I think your problem is that your samples are not periodic as they should be; you should exclude the last point:
import numpy as np
x=np.linspace(-np.pi,np.pi,50,endpoint=False)
y=-np.sin(x)
yf=np.fft.fft(y)
even_part=yf.real
odd_part=yf.imag
Here only odd_part[1] (together with its negative-frequency mirror odd_part[-1]) is non-zero.
If your function is not odd and you want to force it, you can either use the DST (discrete sine transform), as I mentioned in the comments, or append the negated mirror image of your function on the left side (an odd extension) and then use fft.
Another point: if your input is not complex, it is faster and more memory-efficient to use rfft.
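For completeness, a minimal sketch of the rfft variant with the same sampling as above:

import numpy as np

x = np.linspace(-np.pi, np.pi, 50, endpoint=False)
y = -np.sin(x)

yf = np.fft.rfft(y)        # only the non-negative frequencies, since y is real
odd_part = yf.imag         # sine (odd) coefficients
even_part = yf.real        # cosine (even) coefficients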

fsolve problems with the starting point

I'm using fsolve in order to solve a nonlinear equation. My problem is that, depending on the starting point, the solutions change, and I am not sure that the ones I found are the most reasonable.
This is the code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve, brentq, newton

A = np.arange(0.05, 0.95, 0.01)
PHI = np.deg2rad(np.arange(0, 90, 1))

def f(b):
    return np.angle((1 + 3*a**4 - 3*a**2)
                    + (a**4 - a**6)*(np.exp(2j*b) + 2*np.exp(-1j*b))
                    + (a**2 - 2*a**4 + a**6)*(np.exp(-2j*b) + 2*np.exp(1j*b))) - Phi

B = np.zeros((len(A), len(PHI)))
for i in range(len(A)):
    for j in range(len(PHI)):
        a = A[i]
        Phi = PHI[j]
        b = fsolve(f, 1)
        B[i, j] = b
I fixed x0 = 1 because it seems to give the most reasonable values. But sometimes I think the method doesn't converge and the resulting values are too big.
What can I do to find the best solution?
Many thanks!
The eternal issue with turning non-linear solvers loose is that you need a really good understanding of your function, your initial guess, the solver itself, and the problem you are trying to address.
I note that there are many (a,Phi) combinations where your function does not have real roots. You should do some math, directed by the actual problem you are trying to solve, and determine where the function should have roots. Not knowing the actual problem, I can't do that for you.
Also, as noted in a (since deleted) answer, this is cyclic in b, so using a bounded solver (such as scipy.optimize.minimize with method='L-BFGS-B') might help to keep things under control. Note that to find roots with a minimizer you minimize the square of your function. If the minimum found is not close to zero (with "close" defined by you, based on the problem), the actual roots might be a complex conjugate pair.
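A minimal sketch of that idea, rewriting f to take a and Phi explicitly (one made-up pair below) and bounding b to a single period:

import numpy as np
from scipy.optimize import minimize

def f(b, a, Phi):
    z = ((1 + 3*a**4 - 3*a**2)
         + (a**4 - a**6) * (np.exp(2j*b) + 2*np.exp(-1j*b))
         + (a**2 - 2*a**4 + a**6) * (np.exp(-2j*b) + 2*np.exp(1j*b)))
    return np.angle(z) - Phi

a, Phi = 0.5, np.deg2rad(30)           # example pair (assumed)

# Minimize the squared residual over one period of b; if the minimum is not
# close to zero, there may be no real root for this (a, Phi) combination.
res = minimize(lambda b: f(b[0], a, Phi)**2, x0=[1.0],
               method='L-BFGS-B', bounds=[(0.0, 2*np.pi)])
print(res.x, res.fun)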
Good luck.

Is it possible to invert an arbitrary lambda in Python?

I have been playing around with Python and math lately, and I ran into something I have yet to be able to figure out. Namely, is it possible, given an arbitrary lambda, to return the inverse of that lambda for mathematical operations? That is, an invertLambda such that invertLambda(lambda x: (x+2))(2) = 0. The fact that lambdas are restricted to expressions gives me hope, but so far I have not been able to make it work. I understand that any result would have problems with functions that lose information, but I am willing to restrict users and myself to lossless functions if I have to.
Of course not: if the lambda is not an injective function, you cannot invert it. Example: you cannot invert a lambda mapping x to x*x, since the sign of the original x is lost.
Leaving injectivity aside, there are functions which are computationally very complex to invert. Consider, for example, restoring the original value from its md5 hash. (For a lambda calculating an md5 hash, the inverse function would have to break md5 in the cryptographic sense!)
Edit:
Indeed, we can theoretically make lambdas invertible if we restrict the expressions that can be used in them. For example, if the lambda is a linear function of one argument, we can easily invert it. If it's a polynomial of degree greater than 4, we have a problem obtaining an algebraically exact solution.
Of course, we could refrain from an exact solution and just invert the function numerically. This is possible: any method of numerically solving the equation lambda(x) = value will do (the simplest being binary search).
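A minimal sketch of that numeric route, assuming the lambda is continuous and strictly increasing on a known interval:

def invert_numeric(f, lo, hi, tol=1e-12):
    # Numerically invert f on [lo, hi] by bisection; f is assumed
    # continuous and strictly increasing on the interval.
    def f_inverse(y):
        a, b = lo, hi
        while b - a > tol:
            mid = 0.5 * (a + b)
            if f(mid) < y:
                a = mid
            else:
                b = mid
        return 0.5 * (a + b)
    return f_inverse

inv = invert_numeric(lambda x: x + 2, -100.0, 100.0)
print(inv(2))   # approximately 0.0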
I am a bit late, but I just published a Python package that does precisely this. You may want to borrow some ideas from it:
https://pypi.python.org/pypi/pynverse
It essentially follows this strategy:
Figure out if the function is increasing or decreasing. For this two reference points ref1 and ref2 are needed:
In the case of a finite interval, the reference points are 1/4 and 3/4 of the way through the interval.
In an infinite interval, any two values really work.
If f(ref1) < f(ref2), the function is increasing, otherwise is decreasing.
Figure out the image of the function in the interval.
If values are provided, then those are used.
In a closed interval just calculate f(a) and f(b), where a and b are the ends of the interval.
In an open interval, try to calculate f(a) and f(b); if this works, those are used, otherwise the image is assumed to be (-Inf, Inf).
Build a bounded function with the following conditions:
bounded_f(x):
    return -Inf if x is below the interval and f is increasing
    return +Inf if x is below the interval and f is decreasing
    return +Inf if x is above the interval and f is increasing
    return -Inf if x is above the interval and f is decreasing
    return f(x) otherwise
If the required number y0 for the inverse is outside the image, raise an exception.
Find roots of bounded_f(x) - y0 by minimizing (bounded_f(x) - y0)**2 with the Brent method, making sure that the minimization starts at a point inside the original interval by setting ref1 and ref2 as brackets. As soon as it goes outside the allowed interval, bounded_f returns infinity, forcing the algorithm back to searching inside the interval.
Check that the solutions are accurate and that they satisfy f(x0) = y0 to the desired precision, raising a warning otherwise.
Of course, as Vlad pointed out, the function has to be invertible for the inverse to exist, and also continuous in the domain for this to work.
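For reference, the basic usage of the package is roughly the following (recalled from its README, so treat the exact call as an assumption):

from pynverse import inversefunc

cube = lambda x: x ** 3
invcube = inversefunc(cube)
print(invcube(27))        # approximately 3.0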
