I'm receiving an error with this simple code. The problem is that the error only appears with one of the equations I need (78 * x**0.3 * y**0.8 - 376).
The error: invalid value encountered in double_scalars, raised on the line F[0] = 78 * x**0.3 * y**0.8 - 376
If I remove * y**0.8 from the first equation, the code runs perfectly, but obviously that doesn't solve my problem.
Code:
import numpy as np
from scipy.optimize import fsolve
def Funcion(z):
    x = z[0]
    y = z[1]
    F = np.empty((2))
    F[0] = 78 * x**0.3 * y**0.8 - 376
    F[1] = 77 * x**0.5 * y - 770
    return F
zGuess = np.array([1,1])
z = fsolve(Funcion,zGuess)
print(z)
F[0] will be complex if y is negative (a negative base with a fractional exponent has no real value, which is what the invalid value warning is telling you), and fsolve doesn't support complex root finding.
You need to solve the nonlinear equation system F(x, y) = 0 subject to x, y >= 0, which is equivalent to minimizing the Euclidean norm ||F(x, y)|| s.t. x, y >= 0. To solve this constrained optimization problem, you can use scipy.optimize.minimize as follows:
import numpy as np
from scipy.optimize import minimize
def Funcion(z):
    x = z[0]
    y = z[1]
    F = np.empty((2))
    F[0] = 78 * x**0.3 * y**0.8 - 376
    F[1] = 77 * x**0.5 * y - 770
    return F
# initial point
zGuess = np.array([1.0, 1.0])
# bounds x, y >= 0
bounds = [(0, None), (0, None)]
# Solve the constrained optimization problem
z = minimize(lambda z: np.linalg.norm(Funcion(z)), x0=zGuess, bounds=bounds)
# Print the solution
print(z.x)
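As a quick sanity check (assuming the optimizer converged), plug the solution back into the original system; if a true root exists in the feasible region, both residuals should come out close to zero:
print(z.success, z.fun)  # convergence flag and the final value of the norm
print(Funcion(z.x))      # residuals of the original equations; ~0 at a true root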
I have the following negative quadratic equation:
-0.03402645959398278 x^2 + 156.003469 x - 178794.025
I want to know if there is a straightforward way (using numpy/scipy or any other library) to get the value of x at which the derivative is zero (the maximum). I'm aware I could:
change the sign of the equation and apply the scipy.optimize.minimize method, or
take the derivative of the equation and solve for the x where the slope is zero
For instance:
import numpy as np
from scipy.optimize import minimize

quad_eq = np.poly1d([-0.03402645959398278, 156.003469, -178794.025])

############SCIPY####################
neg_quad_eq = np.poly1d(np.negative(quad_eq))
fit = minimize(neg_quad_eq, x0=15)
slope_zero_neg = fit.x[0]
maxima = np.polyval(quad_eq, slope_zero_neg)
print(maxima)

##################numpy######################
first_dev = np.polyder(quad_eq)
slope_zero = first_dev.r
maxima = np.polyval(quad_eq, slope_zero)
print(maxima)
Is there a more direct way to get the same result?
You don't need all that code... The first derivative of a x^2 + b x + c is 2a x + b, so solving 2a x + b = 0 for x yields x = -b / (2a), which is exactly the maximum you are searching for:
import numpy as np
import matplotlib.pyplot as plt
def func(x, a=-0.03402645959398278, b=156.003469, c=-178794.025):
    result = a * x**2 + b * x + c
    return result

def func_max(a=-0.03402645959398278, b=156.003469, c=-178794.025):
    maximum_x = -b / (2 * a)
    maximum_y = a * maximum_x**2 + b * maximum_x + c
    return maximum_x, maximum_y
x = np.linspace(-50000, 50000, 100)
y = func(x)
mx, my = func_max()
print('maximum:', mx, my)
maximum: 2292.384674478263 15.955750522436574
and verify graphically:
plt.plot(x, y)
plt.axvline(mx, color='r')
plt.axhline(my, color='r')
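For completeness, this agrees with the np.polyder route from your question; the root of the first derivative is the same x (a one-line cross-check, assuming quad_eq as you defined it):
quad_eq = np.poly1d([-0.03402645959398278, 156.003469, -178794.025])
print(np.polyder(quad_eq).r)  # ~[2292.38467448], matching -b / (2a)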
I am trying to fit a function cav = p(T, x) with the condition that the derivative of cav with respect to x at constant T is always positive, i.e. dp/dx (at constant T) > 0. The data for x, T, and p come from Excel sheets; z is the vector of coefficients I am trying to find.
I've used the solution from Fitting with constraints on derivative Python as a template. Here is my code as it stands, followed by the error message it produces:
import pandas as pd
import os
from scipy.optimize import minimize
import numpy as np
df = pd.read_excel(os.path.join(os.path.dirname(__file__), "./data.xlsx"))
T = np.array(df['T'], dtype=float)
x = np.array(df['x'], dtype=float)
p = np.array(df['p'], dtype=float)
p_s = 67
def cav(z, T, x):  # my function
    return x * p_s + x * (1 - x) * (z[0] + z[1] * T + z[2] * T ** 2 + z[3] * x + z[4] * x * T + z[5] * x * T ** 2) * p_s

def resid(p, T, x):
    return ((p - cav(T, x)) ** 2).sum()

def constr(z):
    return np.gradient(cav(z, x, T))
con1 = {'type': 'ineq', 'fun': constr}
z0 = np.array([0,0,0,0,0,0], dtype=float)
res = minimize(resid,z0, args=(p,T,x), method='cobyla',options={'maxiter':50000}, constraints=con1)
And the Error:
TypeError: resid() takes 3 positional arguments but 4 were given
I don't understand what exactly I have to pass as arguments to these three functions. Thanks for any help!
The error occurs because minimize already passes the current coefficient vector (starting from z0) as the first argument of resid, so args=(p,T,x) hands resid four arguments in total.
Thus, the line that has to change is:
res = minimize(resid,z0, args=(T,x), method='cobyla',options={'maxiter':50000}, constraints=con1)
Another issue in your code is:
def resid(p, T, x):
    return ((p - cav(T, x)) ** 2).sum()
Your function cav takes three arguments, but you pass only two. Also note that the first argument minimize feeds to resid is the coefficient vector, not the measured pressures, so it is clearer to rename it and compare the model against the data p:
def resid(z, T, x):
    return ((p - cav(z, T, x)) ** 2).sum()
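Note, in passing, that constr calls cav(z, x, T) while cav is defined as cav(z, T, x), so the constraint evaluates the model with T and x swapped. With all three fixes applied, the call would look like this (a sketch; the data loading and cav stay unchanged):
def constr(z):
    return np.gradient(cav(z, T, x))  # argument order now matches cav's signature

con1 = {'type': 'ineq', 'fun': constr}
res = minimize(resid, z0, args=(T, x), method='cobyla',
               options={'maxiter': 50000}, constraints=con1)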
I am trying to use the Python interpolation function to get the value y for a given x, but I am getting the error "ValueError: x and y arrays must be equal in length along interpolation axis", even though my arrays have equal size and shape (according to what .shape reports). I am quite new to programming, so I don't know how else to check what could differ between my arrays. Here is my code:
s = []
def slowroll(y, t):
    phi, dphi, a = y
    h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2. * phi**2))
    da = h * a
    ddphi = -3. * h * dphi - phi
    return [dphi, ddphi, da]
phi_ini = 18.
dphi_ini = -0.1
init_y = [phi_ini,dphi_ini,1.]
h_ini =np.sqrt(1/3. * (1/2. * dphi_ini**2. + 1/2.*phi_ini**2.))
t=np.linspace(0.,20.,100.)
from scipy.integrate import odeint
sol = odeint(slowroll, init_y, t)
phi = sol[:,0]
dphi = sol[:,1]
a=sol[:,2]
n=np.log(a)
h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2))
s.extend(a*h)
x = np.asarray(s)
y = np.asarray(t)
F = interp1d(y, x, kind='cubic')
print F(7.34858263)
After adding in the required imports, I've been unable to duplicate your error with version 2.7.12. What Python version are you running?
import numpy as np
from scipy.interpolate import interp1d
s = []
def slowroll(y, t):
    phi, dphi, a = y
    h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2. * phi**2))
    da = h * a
    ddphi = -3. * h * dphi - phi
    return [dphi, ddphi, da]
phi_ini = 18.
dphi_ini = -0.1
init_y = [phi_ini,dphi_ini,1.]
h_ini =np.sqrt(1/3. * (1/2. * dphi_ini**2. + 1/2.*phi_ini**2.))
t=np.linspace(0.,20.,100.)
from scipy.integrate import odeint
sol = odeint(slowroll, init_y, t)
phi = sol[:,0]
dphi = sol[:,1]
a=sol[:,2]
n=np.log(a)
h = np.sqrt(1/3. * (1/2. * dphi**2 + 1/2.*phi**2))
s.extend(a*h)
x = np.asarray(s)
y = np.asarray(t)
F = interp1d(y, x, kind='cubic')
print F(7.34858263)
Output:
2.11688518961e+20
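One guess, since the script is clean when run fresh: s is a module-level list, so if you re-run the code in the same interpreter session, s.extend(a*h) keeps appending and x ends up longer than y (which is rebuilt at 100 points every time). Printing both shapes just before interpolating would confirm or rule that out:
print x.shape, y.shape  # both should be (100,); a longer x means s accumulated across runs
F = interp1d(y, x, kind='cubic')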
I have seen this amazing example.
But I need to solve a system with bounds on x and F, for example:
f1 = x + y^2 = 0
f2 = e^x + x*y = 0
-5.5 < x < 0.18
2.1 < y < 10.6
# 0.15 < f1 < 20.5  - not useful for this example
# -10.5 < f2 < -0.16  - not useful for this example
How can I set these boundary constraints with scipy's fsolve()? Or is there some other method?
Could you give me a simple code example?
I hope this will serve you as a starting point; everything needed was already in the example you linked.
import numpy as np
from scipy.optimize import minimize
def my_fun(z):
    x = z[0]
    y = z[1]
    f = np.zeros(2)
    f[0] = x + y ** 2
    f[1] = np.exp(x) + x * y
    return np.dot(f, f)  # squared norm of the residuals

def my_cons(z):
    x = z[0]
    y = z[1]
    f = np.zeros(4)
    f[0] = x + 5.5   # x >= -5.5
    f[1] = 0.18 - x  # x <= 0.18
    f[2] = y - 2.1   # y >= 2.1
    f[3] = 10.6 - y  # y <= 10.6
    return f
cons = {'type' : 'ineq', 'fun': my_cons}
res = minimize(my_fun, (2, 0), method='SLSQP', constraints=cons)
res
status: 0
success: True
njev: 7
nfev: 29
fun: 14.514193585986144
x: array([-0.86901099, 2.1 ])
message: 'Optimization terminated successfully.'
jac: array([ -2.47001648e-04, 3.21871972e+01, 0.00000000e+00])
nit: 7
EDIT: In response to the comments: if your function values f1 and f2 are not zero, you just have to rewrite the equations. E.g., for f1 = -6 and f2 = 3, your function to minimize becomes:
def my_fun(z):
    x = z[0]
    y = z[1]
    f = np.zeros(2)
    f[0] = x + y ** 2 + 6
    f[1] = np.exp(x) + x * y - 3
    return np.dot(f, f)
It depends on the system, but here you can simply check the constraints afterwards.
First solve your nonlinear system to get one/none/several solutions of the form (x, y). Then check which, if any, of these solutions satisfy the constraints.
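A minimal sketch of that approach (assuming plain fsolve plus a bounds check afterwards; the starting point is arbitrary):
import numpy as np
from scipy.optimize import fsolve

def F(z):
    x, y = z
    return [x + y**2, np.exp(x) + x * y]

root, info, ier, msg = fsolve(F, [-1.0, 1.0], full_output=True)
x, y = root
if ier == 1 and -5.5 < x < 0.18 and 2.1 < y < 10.6:
    print('admissible root:', root)
else:
    print('no admissible root from this start:', root)
For this particular system the real root sits near (x, y) ≈ (-0.65, 0.81), outside the y-bounds, which is consistent with the nonzero objective value in the answer above.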
How can I do a maximum likelihood regression using scipy.optimize.minimize? I specifically want to use the minimize function here, because I have a complex model and need to add some constraints. I am currently trying a simple example using the following:
import numpy as np
from scipy.optimize import minimize

def lik(parameters):
    m = parameters[0]
    b = parameters[1]
    sigma = parameters[2]
    for i in np.arange(0, len(x)):
        y_exp = m * x + b
    L = sum(np.log(sigma) + 0.5 * np.log(2 * np.pi) + (y - y_exp) ** 2 / (2 * sigma ** 2))
    return L

x = [1, 2, 3, 4, 5]
y = [2, 3, 4, 5, 6]

lik_model = minimize(lik, np.array([1, 1, 1]), method='L-BFGS-B', options={'disp': True})
When I run this, convergence fails. Does anyone know what is wrong with my code? The message I get is 'ABNORMAL_TERMINATION_IN_LNSRCH'. I am using the same approach that works for me with optim in R.
Thank you Aleksander. You were correct that my likelihood function was wrong, not the code. Using a formula I found on Wikipedia, I adjusted the code to:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize

def lik(parameters):
    m = parameters[0]
    b = parameters[1]
    sigma = parameters[2]
    y_exp = m * x + b  # vectorized; the original loop recomputed this unchanged on every pass
    L = (len(x)/2 * np.log(2 * np.pi) + len(x)/2 * np.log(sigma ** 2) + 1 /
         (2 * sigma ** 2) * sum((y - y_exp) ** 2))
    return L

x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 5, 8, 11, 14])

lik_model = minimize(lik, np.array([1, 1, 1]), method='L-BFGS-B')

plt.scatter(x, y)
plt.plot(x, lik_model['x'][0] * x + lik_model['x'][1])
plt.show()
Now it seems to be working.
Thanks for the help!
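For what it's worth, with Gaussian errors the ML estimates of m and b coincide with ordinary least squares (the sigma terms don't affect the argmin over m and b), so np.polyfit makes a quick sanity check:
print(lik_model.x[:2])      # ML estimates of m and b
print(np.polyfit(x, y, 1))  # OLS slope and intercept; should be close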