I have seen this amazing example.
But I need to solve a system with bounds on x and f, for example:
f1 = x + y^2 = 0
f2 = e^x + x*y = 0
-5.5 < x < 0.18
2.1 < y < 10.6
# 0.15 < f1 < 20.5  - not useful for this example
# -10.5 < f2 < -0.16 - not useful for this example
How can I pass these boundary constraints to scipy's fsolve()? Or is there some other method?
Could you give me a simple code example?
I hope this will serve you as a starter; everything you need is in scipy.optimize.
import numpy as np
from scipy.optimize import minimize

def my_fun(z):
    x = z[0]
    y = z[1]
    f = np.zeros(2)
    f[0] = x + y ** 2
    f[1] = np.exp(x) + x * y
    return np.dot(f, f)  # squared norm of the residuals
def my_cons(z):
    x = z[0]
    y = z[1]
    f = np.zeros(4)
    f[0] = x + 5.5   # x >= -5.5
    f[1] = 0.18 - x  # x <= 0.18
    f[2] = y - 2.1   # y >= 2.1
    f[3] = 10.6 - y  # y <= 10.6
    return f
cons = {'type' : 'ineq', 'fun': my_cons}
res = minimize(my_fun, (2, 0), method='SLSQP', constraints=cons)
res
status: 0
success: True
njev: 7
nfev: 29
fun: 14.514193585986144
x: array([-0.86901099, 2.1 ])
message: 'Optimization terminated successfully.'
jac: array([ -2.47001648e-04, 3.21871972e+01, 0.00000000e+00])
nit: 7
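Note that with recent SciPy you can also pass the box directly via minimize's bounds argument instead of inequality constraints (SLSQP supports bounds); a minimal sketch, using a starting point inside the box, which should find the same constrained minimum here:

# Same problem, with the box passed as bounds instead of 'ineq' constraints;
# the starting point is chosen inside the box.
res = minimize(my_fun, (-1.0, 3.0), method='SLSQP',
               bounds=[(-5.5, 0.18), (2.1, 10.6)])
print(res.x, res.fun)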
EDIT: In response to the comments: if your target values for f1 and f2 are not zero, you just rewrite the equations accordingly, e.g. for
f1 = -6 and f2 = 3
the function to minimize becomes:
def my_fun(z):
    x = z[0]
    y = z[1]
    f = np.zeros(2)
    f[0] = x + y ** 2 + 6         # from f1 = -6
    f[1] = np.exp(x) + x * y - 3  # from f2 = 3
    return np.dot(f, f)
It depends on the system, but here you can simply check the constraints afterwards.
First solve your nonlinear system to get one, none, or several solutions of the form (x, y); then check which of these solutions, if any, satisfy the constraints.
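A minimal sketch of that solve-then-filter approach for the system above (the starting points are hand-picked guesses of mine):

import numpy as np
from scipy.optimize import fsolve

def system(z):
    x, y = z
    return [x + y**2, np.exp(x) + x*y]

# One fsolve run per hand-picked starting point.
candidates = [fsolve(system, guess) for guess in ([-0.5, 1.0], [-2.0, 1.5])]

# Keep only the solutions that lie inside the requested box.
feasible = [(x, y) for x, y in candidates
            if -5.5 < x < 0.18 and 2.1 < y < 10.6]
print(feasible)  # likely empty here: the real root has y near 0.8, below y > 2.1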
Assume the following function:
f(x) = x * cos(x-4)
On the interval x = [-2.5, 2.5], this function crosses zero at x = 0 and x = -0.71238898.
This was determined with the following code:
import math
from scipy.optimize import fsolve
def func(x):
    return x * math.cos(x - 4)
x0 = fsolve(func, 0.0)
# returns [0.]
x0 = fsolve(func, -0.75)
# returns [-0.71238898]
What is the proper way to use fzero (or any other Python root finder) to find both roots in one call? Is there a different scipy function that does this?
fzero reference
I once wrote a module for this task. It's based on chapter 4.3 from the book Numerical Methods in Engineering with Python by Jaan Kiusalaas:
import math

def rootsearch(f, a, b, dx):
    # Scan [a, b] in steps of dx until f changes sign.
    x1 = a; f1 = f(a)
    x2 = a + dx; f2 = f(x2)
    while f1 * f2 > 0.0:
        if x1 >= b:
            return None, None
        x1 = x2; f1 = f2
        x2 = x1 + dx; f2 = f(x2)
    return x1, x2

def bisect(f, x1, x2, switch=0, epsilon=1.0e-9):
    f1 = f(x1)
    if f1 == 0.0:
        return x1
    f2 = f(x2)
    if f2 == 0.0:
        return x2
    if f1 * f2 > 0.0:
        print('Root is not bracketed')
        return None
    # Number of bisection steps needed to reach the tolerance.
    n = int(math.ceil(math.log(abs(x2 - x1) / epsilon) / math.log(2.0)))
    for i in range(n):
        x3 = 0.5 * (x1 + x2); f3 = f(x3)
        if (switch == 1) and (abs(f3) > abs(f1)) and (abs(f3) > abs(f2)):
            return None
        if f3 == 0.0:
            return x3
        if f2 * f3 < 0.0:
            x1 = x3
            f1 = f3
        else:
            x2 = x3
            f2 = f3
    return (x1 + x2) / 2.0

def roots(f, a, b, eps=1e-6):
    print('The roots on the interval [%f, %f] are:' % (a, b))
    while 1:
        x1, x2 = rootsearch(f, a, b, eps)
        if x1 is not None:
            a = x2
            root = bisect(f, x1, x2, 1)
            if root is not None:
                print(round(root, -int(math.log(eps, 10))))
        else:
            print('\nDone')
            break

f = lambda x: x * math.cos(x - 4)
roots(f, -3, 3)
roots finds all roots of f in the interval [a, b].
Define your function so that it can take either a scalar or a numpy array as an argument:
>>> import numpy as np
>>> f = lambda x : x * np.cos(x-4)
Then pass a vector of initial guesses to fsolve.
>>> x = np.array([0.0, -0.75])
>>> fsolve(f,x)
array([ 0. , -0.71238898])
In general (i.e. unless your function belongs to some specific class) you can't find all the global solutions - these methods usually do local optimization from given starting points.
However, you can switch math.cos() with numpy.cos() and that will vectorize your function so it can solve for many values at once, e.g. fsolve(func, np.arange(-10,10,0.5)).
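A short sketch of that multi-start approach, rounding and deduplicating the results afterwards (the 6-decimal rounding tolerance is my choice):

import numpy as np
from scipy.optimize import fsolve

func = lambda x: x * np.cos(x - 4)  # numpy.cos, so func accepts arrays

# One call, many starting points; each component is solved independently.
roots = fsolve(func, np.arange(-10, 10, 0.5))

# Round and deduplicate to list the distinct roots that were found.
print(np.unique(np.round(roots, 6)))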
I'm receiving an error with this simple code; the problem is that the error only appears with one of the equations I need (78 * x**0.3 * y**0.8 - 376).
The error: invalid value encountered in double_scalars; it points at F[0] = 78 * x**0.3 * y**0.8 - 376.
If I erase * y**0.8 from the first equation, the code runs perfectly, but obviously it doesn't work for me.
Code:
import numpy as np
from scipy.optimize import fsolve
def Funcion(z):
    x = z[0]
    y = z[1]
    F = np.empty((2))
    F[0] = 78 * x**0.3 * y**0.8 - 376
    F[1] = 77 * x**0.5 * y - 770
    return F
zGuess = np.array([1,1])
z = fsolve(Funcion,zGuess)
print(z)
F[0] will be complex if y is negative, and fsolve doesn't support complex root finding.
You need to solve the nonlinear equation system F(x,y) = 0 subject to x, y >= 0, which is equivalent to minimizing the Euclidean norm ||F(x,y)|| s.t. x, y >= 0. To solve this constrained optimization problem, you can use scipy.optimize.minimize as follows:
import numpy as np
from scipy.optimize import minimize
def Funcion(z):
    x = z[0]
    y = z[1]
    F = np.empty((2))
    F[0] = 78 * x**0.3 * y**0.8 - 376
    F[1] = 77 * x**0.5 * y - 770
    return F
# initial point
zGuess = np.array([1.0, 1.0])
# bounds x, y >= 0
bounds = [(0, None), (0, None)]
# Solve the constrained optimization problem
z = minimize(lambda z: np.linalg.norm(Funcion(z)), x0=zGuess, bounds=bounds)
# Print the solution
print(z.x)
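Since minimize only drives the norm down, it's worth checking afterwards that the residuals are actually close to zero before treating z.x as a root of the system:

# Residuals near zero mean the minimizer found a genuine root of F.
print(Funcion(z.x))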
I'm trying to solve the following coupled equations:
x = 1;
y - 0.5*y - 0.7*v = 0;
w - 0.7*x - 0.5*w = 0;
v = 1.
(I know the equations x = 1 and v = 1 seem unnecessary, but I need them for a later generalization of the code.)
My code is the following:
import numpy as np
from scipy.optimize import fsolve
def myFunction(z):
    x = z[0]
    y = z[1]
    w = z[2]
    v = z[3]
    F = np.empty((4))
    F[0] = 1
    F[1] = y - 0.5*y - 0.7*v
    F[2] = w - 0.7*x - 0.5*w
    F[3] = 1
    return F
zGuess = np.array([1,2.5,2.5,1])
z = fsolve(myFunction,zGuess)
print(z)
The answer I get is [-224.57569869, -314.40597772, -314.40597817, -224.57569837], but I would expect [1, 1.4, 1.4, 1]. Why is fsolve unable to find the answer to this simple set of equations? What's more, why are the values of x and v, which are never modified, not the same as those of the initial guess?
To convert the requirement x = 1 to code, rewrite it as x - 1 = 0. That means the line F[0] = 1 should be changed to F[0] = x - 1. Similarly, F[3] = 1 should become F[3] = v - 1.
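With those two lines changed (and nothing else), fsolve recovers the expected solution:

import numpy as np
from scipy.optimize import fsolve

def myFunction(z):
    x = z[0]
    y = z[1]
    w = z[2]
    v = z[3]
    F = np.empty((4))
    F[0] = x - 1  # encodes x = 1
    F[1] = y - 0.5*y - 0.7*v
    F[2] = w - 0.7*x - 0.5*w
    F[3] = v - 1  # encodes v = 1
    return F

zGuess = np.array([1, 2.5, 2.5, 1])
print(fsolve(myFunction, zGuess))  # [1.  1.4 1.4 1. ]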
Are there any pure-python implementations of the inverse error function?
I know that SciPy has scipy.special.erfinv(), but that relies on some C extensions. I'd like a pure python implementation.
I've tried writing my own using the Wikipedia and Wolfram references, but it always seems to diverge from the true value when the argument is > 0.9.
I've also attempted to port the underlying C code that SciPy uses (ndtri.c and the cephes polevl.c functions), but that's not passing my unit tests either.
Edit: As requested, I've added the ported code.
Docstrings (and doctests) have been removed because they're longer than the functions. I haven't yet put much effort into making the port more pythonic - I'll worry about that once I get something that passes unit tests.
Supporting functions from cephes polevl.c
def polevl(x, coefs, N):
    ans = 0
    power = len(coefs) - 1
    for coef in coefs[:N]:
        ans += coef * x**power
        power -= 1
    return ans

def p1evl(x, coefs, N):
    return polevl(x, [1] + coefs, N)
Main Inverse Error Function
import math

def inv_erf(z):
    if z < -1 or z > 1:
        raise ValueError("`z` must be between -1 and 1 inclusive")
    if z == 0:
        return 0
    if z == 1:
        return math.inf
    if z == -1:
        return -math.inf

    # From scipy special/cephes/ndtri.c
    def ndtri(y):
        # approximation for 0 <= abs(z - 0.5) <= 3/8
        P0 = [
            -5.99633501014107895267E1,
            9.80010754185999661536E1,
            -5.66762857469070293439E1,
            1.39312609387279679503E1,
            -1.23916583867381258016E0,
        ]
        Q0 = [
            1.95448858338141759834E0,
            4.67627912898881538453E0,
            8.63602421390890590575E1,
            -2.25462687854119370527E2,
            2.00260212380060660359E2,
            -8.20372256168333339912E1,
            1.59056225126211695515E1,
            -1.18331621121330003142E0,
        ]
        # Approximation for interval z = sqrt(-2 log y) between 2 and 8,
        # i.e., y between exp(-2) = .135 and exp(-32) = 1.27e-14.
        P1 = [
            4.05544892305962419923E0,
            3.15251094599893866154E1,
            5.71628192246421288162E1,
            4.40805073893200834700E1,
            1.46849561928858024014E1,
            2.18663306850790267539E0,
            -1.40256079171354495875E-1,
            -3.50424626827848203418E-2,
            -8.57456785154685413611E-4,
        ]
        Q1 = [
            1.57799883256466749731E1,
            4.53907635128879210584E1,
            4.13172038254672030440E1,
            1.50425385692907503408E1,
            2.50464946208309415979E0,
            -1.42182922854787788574E-1,
            -3.80806407691578277194E-2,
            -9.33259480895457427372E-4,
        ]
        # Approximation for interval z = sqrt(-2 log y) between 8 and 64,
        # i.e., y between exp(-32) = 1.27e-14 and exp(-2048) = 3.67e-890.
        P2 = [
            3.23774891776946035970E0,
            6.91522889068984211695E0,
            3.93881025292474443415E0,
            1.33303460815807542389E0,
            2.01485389549179081538E-1,
            1.23716634817820021358E-2,
            3.01581553508235416007E-4,
            2.65806974686737550832E-6,
            6.23974539184983293730E-9,
        ]
        Q2 = [
            6.02427039364742014255E0,
            3.67983563856160859403E0,
            1.37702099489081330271E0,
            2.16236993594496635890E-1,
            1.34204006088543189037E-2,
            3.28014464682127739104E-4,
            2.89247864745380683936E-6,
            6.79019408009981274425E-9,
        ]
        s2pi = 2.50662827463100050242  # sqrt(2*pi)

        code = 1
        if y > (1.0 - 0.13533528323661269189):  # 0.135... = exp(-2)
            y = 1.0 - y
            code = 0
        if y > 0.13533528323661269189:
            y = y - 0.5
            y2 = y * y
            x = y + y * (y2 * polevl(y2, P0, 4) / p1evl(y2, Q0, 8))
            x = x * s2pi
            return x

        x = math.sqrt(-2.0 * math.log(y))
        x0 = x - math.log(x) / x
        z = 1.0 / x
        if x < 8.0:  # y > exp(-32) = 1.2664165549e-14
            x1 = z * polevl(z, P1, 8) / p1evl(z, Q1, 8)
        else:
            x1 = z * polevl(z, P2, 8) / p1evl(z, Q2, 8)
        x = x0 - x1
        if code != 0:
            x = -x
        return x

    result = ndtri((z + 1) / 2.0) / math.sqrt(2)
    return result
I think the error in your code is in the for loop over the coefficients in the polevl function: the slice coefs[:N] drops the final coefficient, since the coefficient lists hold N + 1 entries for a degree-N polynomial. If you replace what you have with the function below, everything seems to work.
def polevl(x, coefs, N):
    ans = 0
    power = len(coefs) - 1
    for coef in coefs:
        ans += coef * x**power
        power -= 1
    return ans
I have tested it against scipy's implementation with the following code:
import numpy as np
from scipy.special import erfinv
N = 100000
x = np.random.rand(N) - 1.
# Calculate the inverse of the error function
y = np.zeros(N)
for i in range(N):
    y[i] = inv_erf(x[i])
assert np.allclose(y, erfinv(x))
sympy? Some digging may be needed to see how it's implemented internally: http://docs.sympy.org/latest/modules/functions/special.html#sympy.functions.special.error_functions.erfinv
from sympy import erfinv
erfinv(0.9).evalf(30)
1.16308715367667425688580351562