Given this function:
def f(x):
    return (1 - x**2)**m * ((1 - x)/2)**n
where m and n are constants, let's say both 0.5 for the sake of an example.
I'm trying to use functions from scipy.optimize to solve for x given a value of y. I'm only interested in x values from -1 to 1. Plotting the function with
x = numpy.arange(0, 1, 0.1)
matplotlib.pyplot.plot(x, f(x))
shows that the function is a kind of distorted parabola covering the range of about 0 to 0.65. So let's try solving it for y = 0.3:
def f(x):
    return (1 - x**2)**m * ((1 - x)/2)**n - 0.3
print(scipy.optimize.newton_krylov(f, 0.5))
0.6718791645800665
This looks about right for one of the possible solutions, but there are two; the second should be around -0.9. No matter what initial guess I try, I can't get it to find that second solution: newton_krylov gives no convergence at all for initial guesses x < 0, and none of the other solvers find it either.
Am I missing something? What am I doing wrong?
The method does converge, at least for the initial guess x = -0.9:
scipy.optimize.newton_krylov(f, -0.9)
#array(-0.9527983).
It diverges for initial guesses approximately in [-0.85, 0.06].
This is because newton_krylov uses the Jacobian of the function, so the iteration behaves much like a gradient-based method and tends to get pulled toward a local extremum instead of the root you are after. Because your function is roughly parabolic, there is a straightforward workaround: find the maximum of f(x), split the search domain in two at that point, then make an initial guess in each half and solve with newton_krylov separately.
import numpy
from scipy.optimize import minimize, newton_krylov

m = n = 0.5

def f(x):
    # Here is our function
    return (1 - x**2)**m * ((1 - x)/2)**n

def minf(x):
    # Negate f so that minimize() finds its maximum, which is where we split the domain
    return -f(x)

def fy(x):
    # This is where the target y value is defined
    return abs(f(x) - 0.3)

if __name__ == "__main__":
    x = numpy.arange(-1., 1., 1e-3, dtype=float)
    # pyplot.plot(x, f(x))
    # pyplot.show()
    minx = minimize(minf, 0.0)['x']
    # Make an initial guess in each half of the domain
    a1 = minx - 1.6 * minx
    a2 = minx + 1.6 * minx
    print(newton_krylov(fy, a1))
    print(newton_krylov(fy, a2))
The output then is:
[0.67187916]
[-0.95279992]
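As a quick check (my own addition, reusing m = n = 0.5 from the question), plugging both printed roots back into the original f reproduces y ≈ 0.3:
for root in (0.67187916, -0.95279992):
    print(root, (1 - root**2)**0.5 * ((1 - root)/2)**0.5)
# Both lines should print a value of approximately 0.3.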
I want to approximate the solutions of dy/dx = -x + 1 with Euler's method on the interval from 0 to 2. I'm using this code:
import numpy as np
import matplotlib.pyplot as plt

def f(x):
    return -x + 1                         # insert any function here

x0 = 1                                    # Initial slope #
dt = 0.1                                  # time step
T = 2                                     # ...from 0 to T
t = np.linspace(0, T, int(T/dt) + 1)      # divide the interval from 0 to 2 by dt
x = np.zeros(len(t))
x[0] = x0                                 # value at 1 is the initial slope
for i in range(1, len(t)):                # apply Euler's method
    x[i] = x[i-1] + f(x[i-1])*dt

plt.figure()                              # plot the result
plt.plot(t, x, color='blue')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.show()
Can I use this code to approximate the solutions of any function on any interval? It's hard to see whether this actually works, because I don't know how to plot the actual solution (-1/2 x^2 + x) alongside the approximation.
It would probably help if you consistently used the same variable names for the same role. Per your output, the solution is y(t). Thus your differential equation should be dy(t)/dt = f(t,y(t)). This would then give an implementation for the slope function and its exact solution
def f(t,y): return 1-t
def exact_y(t,t0,y0): return y0+0.5*(1-t0)**2-0.5*(1-t)**2
Then implement the Euler loop as a separate function as well, keeping problem-specific details out of it as much as possible:
def Eulerint(f, t0, y0, te, dt):
    t = np.arange(t0, te+dt, dt)
    y = np.zeros(len(t))
    y[0] = y0
    for i in range(1, len(t)):  # apply Euler's method
        y[i] = y[i-1] + f(t[i-1], y[i-1])*dt
    return t, y
Then plot the solutions as
y0,T,dt = 1,2,0.1
t,y = Eulerint(f,0,y0,T,dt)
plt.plot(t,y,color='blue')
plt.plot(t,exact_y(t,0,y0),color='orange')
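To check how close the approximation is, you can also print the maximum deviation from the exact solution (a small sketch, reusing f, exact_y and Eulerint from above):
max_error = abs(y - exact_y(t, 0, y0)).max()
print("max error with dt = %g: %g" % (dt, max_error))
# For dt = 0.1 this comes out to roughly 0.1 at t = 2; halving dt should roughly halve it, since Euler is first order.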
You can just plot the actual solution by using:
def F(x):
    return -0.5*x + x

# some code lines
plt.plot(t, x, color='blue')
plt.plot(t, F(t), color='orange')
But please note that the actual solution (-1/2 x + x = 1/2 x) does not correspond to your slope f(x) and will show a different curve.
The real slope f(x) of that solution (-1/2 x + x = 1/2 x) is simply f(x) = 1/2.
Could somebody please explain how to do a gradient descent problem WITHOUT the context of a cost function? I have seen countless tutorials that explain gradient descent using a cost function, but I really don't understand how it works in a more general sense.
I am given a 3D function:
z = 3*((1 - xx)**2) * np.exp(-(xx**2) - (yy + 1)**2) \
    - 10*(xx/5 - xx**3 - yy**5) * np.exp(-xx**2 - yy**2) \
    - (1/3) * np.exp(-(xx + 1)**2 - yy**2)
And I am asked to:
Code a simple gradient algorithm. Set the parameters as follows:
learning rate = step size: 0.1
Max number of iterations: 20
Stopping criterion: 0.0001 (Your iterations should stop when your gradient is smaller than the threshold)
Then start your algorithm at
(x0 = 0.5, y0 = -0.5)
(x0 = -0.3, y0 = -0.3)
I have seen this piece of code floating around wherever gradient descent is talked about:
def update_weights(m, b, X, Y, learning_rate):
    m_deriv = 0
    b_deriv = 0
    N = len(X)
    for i in range(N):
        # Calculate partial derivatives
        # -2x(y - (mx + b))
        m_deriv += -2*X[i] * (Y[i] - (m*X[i] + b))
        # -2(y - (mx + b))
        b_deriv += -2*(Y[i] - (m*X[i] + b))
    # We subtract because the derivatives point in the direction of steepest ascent
    m -= (m_deriv / float(N)) * learning_rate
    b -= (b_deriv / float(N)) * learning_rate
    return m, b
But I don't understand how to use it for my problem. How does my function fit in there? What do I adjust instead of m and b? I'm very very confused.
Thank you.
Gradient descent is an optimization algorithm for finding the minimum of a function.
Very simplified view
Let's start with a 1D function y = f(x).
Let's start at an arbitrary value of x and find the gradient (slope) of f(x) there.
If the function is decreasing at x, we have to move further right (toward larger x) to reach the minimum.
If the function is increasing at x, we have to move left (toward smaller x).
We can get the slope by taking the derivative of the function: the derivative is negative where the function is decreasing and positive where it is increasing.
So we can start at some arbitrary value of x and slowly move toward the minimum using the derivative at that value of x. How slowly we move is determined by the learning rate or step size, which gives the update rule
x = x - df_dx*lr
We can see that if the function is decreasing, the derivative (df_dx) is negative, so x increases and we move right; if the function is increasing, df_dx is positive, so x decreases and we move left.
We continue this either for some fixed large number of iterations or until the derivative becomes very small.
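For instance, here is a minimal 1D sketch of that update rule (my own example, using f(x) = (x - 3)**2, which is not from the question):
def df_dx(x):
    # Derivative of f(x) = (x - 3)**2, whose minimum is at x = 3
    return 2 * (x - 3)

x = 10.0   # arbitrary starting point
lr = 0.1   # learning rate / step size
for _ in range(1000):
    step = df_dx(x) * lr
    x = x - step
    if abs(step) < 1e-6:   # stop when the update becomes very small
        break
print(x)   # approximately 3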
Multivariate function z = f(x,y)
The same logic as above applies, except now we take partial derivatives instead of a single derivative.
Update rule is
x = x - dpf_dx*lr
y = y - dpf_dy*lr
where dpf_dx is the partial derivative of f with respect to x (and dpf_dy with respect to y).
The above algorithm is called gradient descent. In machine learning, f(x, y) is a cost/loss function whose minimum we are interested in.
Example
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection
from scipy.optimize import fmin

def z_func(a):
    x, y = a
    return (x - 1)**2 + (y - 2)**2

x = np.arange(-3.0, 3.0, 0.1)
y = np.arange(-3.0, 3.0, 0.1)
X, Y = np.meshgrid(x, y)      # grid of points
Z = z_func((X, Y))            # evaluation of the function on the grid

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, linewidth=0, antialiased=False)
plt.show()
The min of z_func is at (1,2). This can be verified using the fmin function of scipy
fmin(z_func,np.array([10,10]))
Now let's write our own gradient descent algorithm to find the minimum of z_func:
def gradient_descent(x, y, lr):
    while True:
        # Partial derivatives of z_func = (x-1)**2 + (y-2)**2
        d_x = 2*(x - 1)
        d_y = 2*(y - 2)
        x -= d_x*lr
        y -= d_y*lr
        if abs(d_x) < 0.0001 and abs(d_y) < 0.0001:
            break
    return x, y

print(gradient_descent(10, 10, 0.1))
We are starting at the arbitrary values x = 10 and y = 10 with a learning rate of 0.1. The above code prints (1.000033672997724, 2.0000299315535326), which is correct.
So if you have a continuously differentiable convex function, to find its optimum (which is the minimum for a convex function) all you have to do is find the partial derivatives with respect to each variable and apply the update rule above. Repeat the steps until the gradient is small, which means we have reached the minimum of a convex function.
If the function is not convex, we might get stuck in a local optimum.
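To connect this back to the function in your question, here is a hedged sketch (my own, not part of the original answer) that approximates the partial derivatives with central finite differences and applies the same update rule with the requested settings (learning rate 0.1, at most 20 iterations, stop when the gradient norm is below 0.0001):
import numpy as np

def z(xx, yy):
    # The function from the question
    return (3*(1 - xx)**2 * np.exp(-xx**2 - (yy + 1)**2)
            - 10*(xx/5 - xx**3 - yy**5) * np.exp(-xx**2 - yy**2)
            - (1/3) * np.exp(-(xx + 1)**2 - yy**2))

def grad_z(xx, yy, h=1e-6):
    # Central finite-difference approximation of the gradient
    dzdx = (z(xx + h, yy) - z(xx - h, yy)) / (2*h)
    dzdy = (z(xx, yy + h) - z(xx, yy - h)) / (2*h)
    return dzdx, dzdy

def gradient_descent_2d(x, y, lr=0.1, max_iter=20, tol=1e-4):
    for _ in range(max_iter):
        dx, dy = grad_z(x, y)
        if np.hypot(dx, dy) < tol:   # stopping criterion on the gradient norm
            break
        x -= lr*dx
        y -= lr*dy
    return x, y

print(gradient_descent_2d(0.5, -0.5))
print(gradient_descent_2d(-0.3, -0.3))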
I am trying to solve this differential equation as part of my assignment. I am not able to understand how to put the condition for u into the code. In the code shown below, I arbitrarily set
u = 5.
2 dx(t)/dt = -x(t) + u(t)
5 dy(t)/dt = -y(t) + x(t)
u = 2 S(t - 5)
x(0) = 0
y(0) = 0
where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, u):
    dxdt = (-x + u)/2
    return dxdt

def model2(y, x, t):
    dydt = -(y + x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
y = odeint(model2, y0, t, args=(u,))
plt.plot(t, x, 'r-')
plt.plot(t, y, 'b*')
plt.show()
I do not know the SciPy Library very well, but regarding the example in the documentation I would try something like this:
def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time t in R and the two
    parameters K and PT describing the input u as a step function, where K
    is the height of the step and PT is the delay of the step.
    """
    x1, x2 = x               # Split the state into two variables
    u = K if t >= PT else 0  # This is the system input
    # Here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx

x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
You have a couple of issues here, and the step function is only a small part of it. You can define a step function with a simple lambda and then simply capture it from the outer scope without even passing it to your function. Because sometimes that won't be the case, we'll be explicit and pass it.
Your next problem is the order of arguments in the function you integrate. Per the docs, the signature is (y, t, ...): the state first, then the time, then any extra args. So for the first part we get:
u = lambda t: 2 if t > 5 else 0

def model(x, t, u):
    dxdt = (-x + u(t))/2
    return dxdt

x0 = 0
y0 = 0
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(u,))
Moving to the next part: the trouble is that you can't feed x as an argument to the second model, because it is a vector of values of x(t) at particular times, so y + x doesn't make sense inside the function as you wrote it. You can follow your intuition from math class if you pass an x function instead of the x values. Doing so requires interpolating the x values at the specific time values you are interested in (which SciPy handles with no problem):
from scipy.interpolate import interp1d

# flatten because the shapes are off; extrapolate because odeint may step out of bounds
xfunc = interp1d(t.flatten(), x.flatten(), fill_value="extrapolate")

def model2(y, t, x):
    dydt = -(y + x(t))/5
    return dydt

y = odeint(model2, y0, t, args=(xfunc,))
Plotting t against x and y then shows both responses.
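A minimal plotting sketch (my own addition, assuming matplotlib.pyplot is imported as plt and using t, x, y from above):
plt.plot(t, x, 'r-', label='x(t)')
plt.plot(t, y, 'b-', label='y(t)')
plt.legend()
plt.show()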
Sven's answer is more idiomatic for vector programming with scipy/numpy, but I hope my answer provides a clearer path from what you already know to a working solution.
I have a function which is actually a call to another program (some Fortran code). When I call this function (run_moog) I can pass 4 variables, and it returns 6 values. These values should all be close to 0 (in order to minimize), so I combined them like this: np.sum(results**2). Now I have a scalar function that I would like to minimize, i.e. get np.sum(results**2) as close to zero as possible.
Note: When this function (run_moog) takes the 4 input parameters, it creates an input file for the Fortran code that depends on these parameters.
I have tried several ways to optimize this from the scipy docs. But none works as expected. The minimization should be able to have bounds on the 4 variables. Here is an attempt:
from scipy.optimize import minimize  # Tried others as well from the docs

x0 = 4435, 3.54, 0.13, 2.4
bounds = [(4000, 6000), (3.00, 4.50), (-0.1, 0.1), (0.0, None)]
a = minimize(fun_mmog, x0, bounds=bounds, method='L-BFGS-B')  # I've tried several different methods here
print(a)
This then gives me
status: 0
success: True
nfev: 5
fun: 2.3194639999999964
x: array([ 4.43500000e+03, 3.54000000e+00, 1.00000000e-01,
2.40000000e+00])
message: 'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
jac: array([ 0., 0., -54090399.99999981, 0.])
nit: 0
The third parameter changes slightly, while the others stay exactly the same. There have also been 5 function calls (nfev) but no iterations (nit). The output from scipy is shown above.
Couple of possibilities:
Try COBYLA. It should be derivative-free, and supports inequality constraints.
You can't use different epsilons via the normal interface, so try scaling your first variable by 1e4: divide it going in, multiply it coming back out (a small sketch of this appears at the end of this answer).
Skip the normal automatic jacobian constructor, and make your own:
Say you're trying to use SLSQP, and you don't provide a jacobian function. It makes one for you. The code for it is in approx_jacobian in slsqp.py. Here's a condensed version:
from numpy import asfarray, atleast_1d, zeros

def approx_jacobian(x, func, epsilon, *args):
    x0 = asfarray(x)
    f0 = atleast_1d(func(*((x0,) + args)))
    jac = zeros([len(x0), len(f0)])
    dx = zeros(len(x0))
    for i in range(len(x0)):
        dx[i] = epsilon
        jac[i] = (func(*((x0 + dx,) + args)) - f0)/epsilon
        dx[i] = 0.0
    return jac.transpose()
You could try replacing that loop with:
for (i, e) in zip(range(len(x0)), epsilon):
    dx[i] = e
    jac[i] = (func(*((x0 + dx,) + args)) - f0)/e
    dx[i] = 0.0
You can't provide this as the jacobian to minimize, but fixing it up for that is straightforward:
def construct_jacobian(func, epsilon):
    def jac(x, *args):
        x0 = asfarray(x)
        f0 = atleast_1d(func(*((x0,) + args)))
        jac = zeros([len(x0), len(f0)])
        dx = zeros(len(x0))
        for (i, e) in zip(range(len(x0)), epsilon):
            dx[i] = e
            jac[i] = (func(*((x0 + dx,) + args)) - f0)/e
            dx[i] = 0.0
        return jac.transpose()
    return jac
You can then call minimize like:
minimize(fun_mmog, x0,
         jac=construct_jacobian(fun_mmog, [1e0, 1e-4, 1e-4, 1e-4]),
         bounds=bounds, method='SLSQP')
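As for the scaling suggestion above, here is a minimal sketch (my own, with a hypothetical SCALE constant) of dividing the first variable going in and multiplying it back out:
SCALE = 1e4  # bring the first variable (~4435) to the same order of magnitude as the others

def fun_scaled(p):
    # Undo the scaling before calling the real objective
    return fun_mmog([p[0]*SCALE, p[1], p[2], p[3]])

x0_scaled = [4435/SCALE, 3.54, 0.13, 2.4]
bounds_scaled = [(4000/SCALE, 6000/SCALE), (3.00, 4.50), (-0.1, 0.1), (0.0, None)]
res = minimize(fun_scaled, x0_scaled, bounds=bounds_scaled, method='L-BFGS-B')
best = [res.x[0]*SCALE] + list(res.x[1:])  # multiply the first variable back out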
It sounds like your target function doesn't have well-behaved derivatives. The line jac: array([ 0., 0., -54090399.99999981, 0.]) in the output means that only changing the third variable value is significant. And because the derivative w.r.t. this variable is virtually infinite, there is probably something wrong in the function. That is also why the third variable ends up at its upper bound.
I would suggest that you take a look at the derivatives, at least at a few points in your parameter space. Compute them using finite differences and the default step size of SciPy's fmin_l_bfgs_b, 1e-8. Here is an example of how you could compute the derivatives.
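A minimal sketch of such a finite-difference check (my own, since the linked example is not reproduced here), assuming your scalar objective is called fun_mmog:
import numpy as np

def fd_gradient(fun, x, eps=1e-8):
    # Forward finite differences with the same step size fmin_l_bfgs_b uses by default
    x = np.asarray(x, dtype=float)
    f0 = fun(x)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        grad[i] = (fun(xp) - f0) / eps
    return grad

print(fd_gradient(fun_mmog, [4435, 3.54, 0.13, 2.4]))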
Try also plotting your target function. For instance, keep two of the parameters constant and let the two others vary. If the function has multiple local optima, you shouldn't use gradient-based methods like BFGS.
How difficult is it to get an analytical expression for the gradient? If you have one, you can approximate the product of the Hessian with a vector using finite differences. Then you can use other optimization routines available.
Among the various optimization routines available in SciPy, the one called TNC (truncated Newton) is quite robust to the numerical values associated with the problem.
The Nelder-Mead simplex method (suggested by Cristián Antuña in the comments above) is well known to be a good choice for optimizing (possibly ill-behaved) functions with no knowledge of derivatives (see Numerical Recipes in C, Chapter 10).
There are two somewhat specific aspects to your question. The first is the constraints on the inputs, and the second is a scaling problem. The following suggests solutions to these points, but you might need to manually iterate between them a few times until things work.
Input Constraints
Assuming your input constraints form a convex region (as your examples above indicate, but I'd like to generalize it a bit), then you can write a function
def is_in_bounds(p):
    # Return True if p is inside the feasible region
    ...
Using this function, assume that the algorithm wants to move from point from_ to point to, where from_ is known to be in the region. Then the following function will efficiently find the furthest point on the line between the two points to which it can still proceed:
from numpy.linalg import norm

def progress_within_bounds(from_, to, eps):
    """
    from_ -- source (in region)
    to -- target point
    eps -- Euclidean precision along the line
    """
    if norm(to - from_) < eps:
        return from_
    mid = (from_ + to) / 2
    if is_in_bounds(mid):
        return progress_within_bounds(mid, to, eps)
    return progress_within_bounds(from_, mid, eps)
(Note that this function can be optimized for some regions, but it's hardly worth the bother, as it doesn't even call your original objective function, which is the expensive one.)
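For the bounds in your question, is_in_bounds could look like this (a small sketch of my own):
BOUNDS = [(4000, 6000), (3.00, 4.50), (-0.1, 0.1), (0.0, None)]

def is_in_bounds(p):
    # Check each coordinate against its (lower, upper) pair; None means unbounded above
    for v, (lo, hi) in zip(p, BOUNDS):
        if v < lo or (hi is not None and v > hi):
            return False
    return True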
One of the nice aspects of Nelder-Mead is that it proceeds by a series of very intuitive steps. Some of these steps can obviously throw you out of the region, but that is easy to modify. Here is an implementation of Nelder-Mead with the modifications marked between pairs of lines of the form ##################################################################; the parameter prog_eps below is the precision passed to progress_within_bounds:
import copy

'''
Pure Python/Numpy implementation of the Nelder-Mead algorithm.
Reference: https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
'''

def nelder_mead(f, x_start,
                step=0.1, no_improve_thr=10e-6, no_improv_break=10, max_iter=0,
                alpha=1., gamma=2., rho=-0.5, sigma=0.5,
                prog_eps=1e-6):
    '''
    @param f (function): function to optimize, must return a scalar score
        and operate over a numpy array of the same dimensions as x_start
    @param x_start (numpy array): initial position
    @param step (float): look-around radius in initial step
    @param no_improve_thr, no_improv_break (float, int): break after no_improv_break
        iterations with an improvement lower than no_improve_thr
    @param max_iter (int): always break after this number of iterations.
        Set it to 0 to loop indefinitely.
    @param alpha, gamma, rho, sigma (floats): parameters of the algorithm
        (see Wikipedia page for reference)
    @param prog_eps (float): precision used by progress_within_bounds
    '''
    # init
    dim = len(x_start)
    prev_best = f(x_start)
    no_improv = 0
    res = [[x_start, prev_best]]
    for i in range(dim):
        x = copy.copy(x_start)
        x[i] = x[i] + step
        score = f(x)
        res.append([x, score])

    # simplex iter
    iters = 0
    while 1:
        # order
        res.sort(key=lambda x: x[1])
        best = res[0][1]

        # break after max_iter
        if max_iter and iters >= max_iter:
            return res[0]
        iters += 1

        # break after no_improv_break iterations with no improvement
        print('...best so far:', best)

        if best < prev_best - no_improve_thr:
            no_improv = 0
            prev_best = best
        else:
            no_improv += 1

        if no_improv >= no_improv_break:
            return res[0]

        # centroid
        x0 = [0.] * dim
        for tup in res[:-1]:
            for i, c in enumerate(tup[0]):
                x0[i] += c / (len(res)-1)

        # reflection
        xr = x0 + alpha*(x0 - res[-1][0])
        ##################################################################
        ##################################################################
        xr = progress_within_bounds(x0, x0 + alpha*(x0 - res[-1][0]), prog_eps)
        ##################################################################
        ##################################################################
        rscore = f(xr)
        if res[0][1] <= rscore < res[-2][1]:
            del res[-1]
            res.append([xr, rscore])
            continue

        # expansion
        if rscore < res[0][1]:
            xe = x0 + gamma*(x0 - res[-1][0])
            ##################################################################
            ##################################################################
            xe = progress_within_bounds(x0, x0 + gamma*(x0 - res[-1][0]), prog_eps)
            ##################################################################
            ##################################################################
            escore = f(xe)
            if escore < rscore:
                del res[-1]
                res.append([xe, escore])
                continue
            else:
                del res[-1]
                res.append([xr, rscore])
                continue

        # contraction
        xc = x0 + rho*(x0 - res[-1][0])
        ##################################################################
        ##################################################################
        xc = progress_within_bounds(x0, x0 + rho*(x0 - res[-1][0]), prog_eps)
        ##################################################################
        ##################################################################
        cscore = f(xc)
        if cscore < res[-1][1]:
            del res[-1]
            res.append([xc, cscore])
            continue

        # reduction
        x1 = res[0][0]
        nres = []
        for tup in res:
            redx = x1 + sigma*(tup[0] - x1)
            score = f(redx)
            nres.append([redx, score])
        res = nres
Note: this implementation is GPL-licensed, which may or may not be fine for you. It's extremely easy to implement NM from any pseudocode, though, and you might want to throw in simulated annealing in any case.
Scaling
This is a trickier problem, but jasaarim has made an interesting point regarding it. Once the modified NM algorithm has found a point, you might want to make a matplotlib contour plot while fixing a few dimensions, in order to see how the function behaves. At that point, you might want to rescale one or more of the dimensions and rerun the modified NM.
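A minimal sketch of such a contour slice (my own addition, assuming the objective is fun_mmog and fixing the last two parameters at some point p_best found by the optimizer):
import numpy as np
import matplotlib.pyplot as plt

p_best = [4435, 3.54, 0.13, 2.4]   # hypothetical point returned by the modified NM

t_vals = np.linspace(4000, 6000, 50)
g_vals = np.linspace(3.00, 4.50, 50)
T, G = np.meshgrid(t_vals, g_vals)
Z = np.array([[fun_mmog([tt, gg, p_best[2], p_best[3]]) for tt in t_vals] for gg in g_vals])

plt.contour(T, G, Z, levels=20)
plt.xlabel('parameter 1')
plt.ylabel('parameter 2')
plt.show()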
I'm trying to implement Euler's method to approximate the value of e in Python. This is what I have so far:
def Euler(f, t0, y0, h, N):
    t = t0 + arange(N+1)*h
    y = zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] + h*f(t[n], y[n])
    f = (1 + (1/N))^N
    return y
However, when I try to call the function, I get the error "ValueError: shape <= 0". I suspect this has something to do with how I defined f. I tried inputting f directly when Euler is called, but that gave me errors about variables not being defined. I also tried defining f as its own function, which gave me a division-by-zero error.
def f(N):
    for n in range(N):
        return (1 + (1/n))^n
(not sure if N was the appropriate variable to use here...)
The formula you are trying to use is not Euler's method, but rather the definition of e as a limit as n approaches infinity (see the Wikipedia article):
$e = \lim_{n\to\infty} (1 + \frac{1}{n})^n$
Euler's method is used to solve first order differential equations.
Here are two guides that show how to implement Euler's method to solve a simple test function: beginner's guide and numerical ODE guide.
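To see that limit numerically (a quick sketch of my own), you can evaluate the expression for increasing n and compare with numpy's value of e:
import numpy as np

for n in (10, 100, 1000, 10**6):
    print(n, (1 + 1/n)**n)
print("e =", np.e)
# The values approach e ≈ 2.718281828 as n grows.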
To answer the title of this post, rather than the question you are asking, I've used Euler's method to solve usual exponential decay:
$\frac{dN}{dt} = -\lambda N$
Which has the solution,
$N(t) = N_0 e^{-\lambda t}$
Code:
import numpy as np
import matplotlib.pyplot as plt

# Concentration over time (exact solution)
N = lambda t: N0 * np.exp(-k * t)

# dN/dt
def dx_dt(x):
    return -k * x

k = .5
h = 0.001
N0 = 100.

t = np.arange(0, 10, h)
y = np.zeros(len(t))
y[0] = N0

for i in range(1, len(t)):
    # Euler's method
    y[i] = y[i-1] + dx_dt(y[i-1]) * h

max_error = abs(y - N(t)).max()
print("Max difference between the exact solution and Euler's approximation with step size h=0.001:")
print('{0:.15}'.format(max_error))
Output:
Max difference between the exact solution and Euler's approximation with step size h=0.001:
0.00919890254720457
Note: I'm not sure how to get LaTeX displaying properly.
Are you sure you are not trying to implement Newton's method? Newton's method is used to approximate roots.
In case you decide to go with Newton's method, here is a slightly changed version of your code that approximates the square root of 2. You can replace f(x) and fp(x) with the function and its derivative for whatever you want to approximate.
import numpy as np

def f(x):
    return x**2 - 2

def fp(x):
    return 2*x

def Newton(f, y0, N):
    y = np.zeros(N+1)
    y[0] = y0
    for n in range(N):
        y[n+1] = y[n] - f(y[n])/fp(y[n])
    return y

print(Newton(f, 1, 10))
gives
[ 1. 1.5 1.41666667 1.41421569 1.41421356 1.41421356
1.41421356 1.41421356 1.41421356 1.41421356 1.41421356]
which are the initial value and the first ten iterations to the square-root of two.
Besides this, a big problem was the use of ^ instead of ** for powers: ^ is legal Python, but it is a totally different operation (bitwise XOR).
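A quick demonstration of the difference (my own addition):
print(2**3)  # 8  (exponentiation)
print(2^3)   # 1  (bitwise XOR of the bit patterns 10 and 11)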