Maximum of a Function using scipy.optimize - python

I have a function with "k value" on the x-axis (k is a three-momentum difference) and cross section on the y-axis, and I need to find the maximum y-value of the function (i.e. its maximum cross section) using scipy.optimize.minimize. The equation is a function of kappa (k and kappa are related), where the values of a, b, c, d and e are constants. From drawing the graph on Desmos I know the answer I am looking for, which is 245.
The issue is that the code I have written gives an answer much different from the one I am looking for. The code is written below and the equation is at the bottom.
import numpy as np
from scipy.optimize import minimize

def two_pion_deuteron(k_value):
    a, b, c, d, e = 2.855e6, 1.311e1, 2.961e3, 5.572e0, 1.416e6
    Cross_section = (a*(k_value)**b)/((c - np.exp(d*k_value)**2) + e)
    return Cross_section

Max_cross_section = minimize(lambda x: -two_pion_deuteron(x), 0, method='Nelder-Mead')
print(-Max_cross_section.fun)
The output is 9.49205479500129e+16, which is very far away from the real answer of 245.

Change this line to:
Max_cross_section = minimize(two_pion_deuteron, 0, method = 'Nelder-Mead')
Then you will need to find your correct initial guess (x0).

What makes you think that this function has a maximum? It clearly has a singularity at np.exp(d*k_value)**2 == c+e:
import matplotlib.pyplot as plt

x = np.linspace(0, 3, 1000)
plt.plot(x, two_pion_deuteron(x))
plt.show()
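For reference, here is a quick sketch, using the constants from the question, of where that singularity sits; interpreting np.exp(d*k_value)**2 as exp(d*k) squared, the denominator vanishes where exp(2*d*k) == c + e:
import numpy as np

c, d, e = 2.961e3, 5.572e0, 1.416e6
# denominator c - exp(d*k)**2 + e is zero where exp(2*d*k) == c + e
k_singular = np.log(c + e) / (2 * d)
print(k_singular)  # roughly 1.27, inside the plotted range, where the expression blows up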


Numpy: Solve linear equation system with one unknown + number

I would like to solve a linear equation system in numpy in order to check whether a point lines up with a vector or not.
Given are the following equations for a vector2:
point[x] = vector1[x] + λ * vector2[x]
point[y] = vector1[y] + λ * vector2[y]
Numpy's linalg.solve() offers the option to solve two equations in the form:
ax + by = c
by defining the parameters a and b in a numpy.array().
But I can't seem to find a way to deal with equations with one fixed parameter like:
m*x + b = 0
Am I missing a point or do I have to deal with another solution?
Thanks in advance!
Hi, I will give it a try and help with this question.
The numpy.linalg.solve documentation says:
Computes the “exact” solution, x, of the well-determined, i.e., full rank, linear matrix equation ax = b.
Note the assumptions made on the matrix!
Lambda the same
If the lambda for the point[x] and point[y] equations should be the same, then just concatenate all the vectors:
x_new = np.concatenate([x,y])
vec1_new = np.concatenate([vec1_x,vec1_y])
...
This will likely overdetermine your system: you have more equations than parameters to determine, so the well-determined assumption is violated. My approach would be to go with least squares.
numpy.linalg.lstsq provides a least-squares solver. It solves the equation y = mx + c; for your case this is y = point[x], x = vector2[x] and c = vector1[x].
This is copied from the numpy.linalg.lstsq example:
x = np.array([0, 1, 2, 3])
y = np.array([-1, 0.2, 0.9, 2.1])
A = np.vstack([x, np.ones(len(x))]).T # => horizontal stack
m, c = np.linalg.lstsq(A, y, rcond=None)[0]
Lambda different
If the lambdas are different, stack vector2[x] and vector2[y] horizontally and you have [lambda_1, lambda_2] to find. You will probably also have more equations than lambdas, and you will end up with a least-squares solution.
Note
Keep in mind that even if you construct your system from a straight line and a fixed lambda, you might need a least-squares approach due to rounding and numeric differences.
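To tie this back to the original point/vector question, here is a minimal sketch (with made-up numbers) that stacks the two scalar equations point = vector1 + lambda*vector2 into one least-squares problem with a single unknown lambda:
import numpy as np

# made-up example data; replace with your own point and vectors
point = np.array([3.0, 5.0])
vector1 = np.array([1.0, 1.0])
vector2 = np.array([1.0, 2.0])

# point = vector1 + lam*vector2  ->  vector2*lam = point - vector1
A = vector2.reshape(-1, 1)            # one column, because lambda is the only unknown
rhs = point - vector1
lam, residuals, rank, sv = np.linalg.lstsq(A, rhs, rcond=None)
print(lam, residuals)                 # lam == [2.]; near-zero residuals mean the point lies on the line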
You can solve your equation 2*x + 4 = 0 with sympy:
from sympy.abc import x
from sympy import Eq, solve
eq = Eq(2 * x + 4, 0)
print(solve(eq))

How to fit data to non-ideal diode equation (implicit non-linear function) and retrieve parameters

Plot of scattered data
I need to fit (x,y)-data to an equation with two variables (the x's and the y's) and retrieve the 5 unknown parameters.
I am making a script to treat IV-data (current-voltage) from a simple .txt-file and fit it to an equation, called the non-ideal diode equation; this is an implicit non-linear function.
So far I've opened the file with python, sorted the data into numpy arrays, made a scatter plot of the raw data, and I know what the function to be fitted should look like. I tried defining the equation, and tried the SciPy functions fsolve and curve_fit, but so far without luck (maybe I'm just bad at using them).
What I need is simply to fit the data to the following equation, retrieve the parameters, and plot the actual curve:
y = a - b * (np.exp((x - y * d) / c) - 1) - (x + y * d) / e
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import optimize
OpenFile = pd.read_csv("test.txt", sep="\t", header=0)
FileArray = np.array(OpenFile)
x = FileArray[:, 0]
y = FileArray[:, 1] * 1000.0 / 0.08
plt.scatter(x, y)
def diode(data, a, b, c, d, e):
    v, j = data
    return a - b * (np.exp((v - j * d) / c) - 1) - (v + j * d) / e - j

### FAILED SCIPY OPTIMIZE ATTEMPT ###
parameters, parameterscovariance = optimize.curve_fit(diode, (x, y), y,
                                                      bounds=([0, 0, 0, 0, 0],
                                                              [np.inf, np.inf, np.inf, np.inf, np.inf]),
                                                      max_nfev=10000)

plt.plot(x, diode((x, y), parameters[0], parameters[1], parameters[2], parameters[3], parameters[4]))
plt.show()
The code plots the data points, but the diode-equation needs to be optimized, parameters retrieved, and the optimized equation plotted.
EDIT: Now inserted the attempt to make scipy optimize the implicit function
It would be helpful to post the data somewhere and to give the nature of the failure you get. That is, does Python give an exception, does the fit give an error message, or does the fit run to completion but the fit is simply "not good"?
You definitely want to give initial values for the fit parameters. It is beyond comprehension that scipy.optimize.curve_fit() allows users to not specify starting values -- it should be an error to not specify initial values. Non-linear curve fitting problems are generally not global optimizations; they work by refinement of the starting values, and are often sensitive to those initial values (especially when an exponential decay is involved). FWIW, when you do not explicitly state initial values, the value used is 1 for all parameters. Is this a good default value? No, it is not.
I also think you have a potentially more serious issue. Your model for "y" is transcendental: "y" depends on "y". I don't recognize the formula you're using, but I can believe the I-V curves for diodes are transcendental. Unless your value for your d parameter is << 1, I think your model will be unstable. You may want to ensure that d is bounded to be << 1, and almost certainly do not want to start with d=1.
That is probably not the sort of answer you were looking for, but I hope it helps get you on the right path.
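To make that advice concrete, here is a minimal, hedged sketch of one way to set it up (not a verified fit of your data): treat the implicit equation as a residual that should be zero, pass explicit starting values via p0, and keep d bounded well below 1. The p0 values below are placeholders, not recommendations, and x, y are the arrays loaded in the question:
import numpy as np
from scipy import optimize

def diode_residual(data, a, b, c, d, e):
    v, j = data
    # implicit non-ideal diode equation, rearranged so that a perfect fit returns 0
    return a - b * (np.exp((v - j * d) / c) - 1) - (v + j * d) / e - j

p0 = [1.0, 1e-3, 1.0, 1e-3, 10.0]                      # placeholder starting values
lower = [0, 0, 0, 0, 0]
upper = [np.inf, np.inf, np.inf, 0.1, np.inf]          # keep d << 1, as discussed above
params, cov = optimize.curve_fit(diode_residual, (x, y), np.zeros_like(y),
                                 p0=p0, bounds=(lower, upper), max_nfev=10000)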

Solving PDE with methods of lines

I'm trying to numerically solve a parabolic partial differential equation (PDE):
u_t = u_xx - u(1-u)(0.3-u)
with Neumann boundary conditions and a step like function as an initial condition.
Expected plot result is here
It may be a bit misleading: red is the initial step-like function, and the smooth lines are different iterations in time afterwards (t = 50, 100, 150, ...).
The idea is based on the method of lines: discretize space (x), approximate the second derivative with a symmetric centered finite difference, and then solve the resulting set of ODEs with the standard scipy odeint (which can use implicit methods if I'm not mistaken). I think this is a pretty standard way to deal with such parabolic PDEs (correct me if I'm wrong).
Below are code snippets I'm running.
In a nutshell, I want to represent the above continuous equation in a matrix-based form:
[u]_t = A*[u] + f([u])
where A is the matrix of the second-order symmetric centered approximation of u_xx (after multiplying by [u]).
Unfortunately, almost nothing qualitatively significant happens to the initial condition after running the code, so I'm not sure whether the problem is in my script or in the approach itself.
Defining the right side u(1-u)(0.3-u):
# imports used by the snippets below
import numpy as np
from scipy.sparse import diags
from scipy.integrate import odeint

def f(u, h):
    puff = []
    for i in u:
        puff.append(i*(1. - i)*(0.3 - i))
    puff[0] = (h/2.)*puff[0]  # this is to include
    puff[len(puff)-1] = (h/2.)*puff[len(puff)-1]
    return puff
Defining spatial resolution:
h = 10.
x = np.linspace(0., 100., num=h)
This defines the second derivative by means of finite differences (A*[u]):
diag = [-2]*(len(x))
diag[0] = -h
diag[len(x)-1] = h**2
diag_up = [1]*(len(x)-1)
diag_up[0] = h
diag_down = [1]*(len(x)-1)
diag_down[len(x)-2] = 0
diagonals = [diag, diag_up, diag_down]
A = diags(diagonals, [0, 1, -1]).toarray()
A = A/h**2
Defining step-like initial condition:
init_cond = [0]*(len(x))
for i in range(int(round(0.2*len(init_cond)))):
    init_cond[i] = 1
Solving the set of ODEs over the defined time frame:
t = np.linspace(0, 100, 200)

def pde(u, t, A, h):
    dudt = A.dot(u) - f(u, h)
    return dudt

sol = odeint(pde, init_cond, t, args=(A, h))
print(sol[len(sol)-1])
The code may be flawed as I'm rather new to Python. If there is anything I can improve, please let me know. The final result is here
The initial conditions did not change at all. Can anyone give me a hint as to where I might have made a mistake? Thank you in advance!

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically. So I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write a product of two quotients as a sum of the two quotients times some constants:
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first-order y terms on both sides. In this case it is simple, A = B = 1 (check: 1/y + 1/(1-y) = ((1-y) + y)/(y*(1-y)) = 1/(y*(1-y))). Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y), the second term can be integrated with a change of variables u = 1-y to -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve t for an exact value of y, we first need to work out the value of C. We do this using the initial conditions. It is clear that if y starts at 1, dy/dt = 0 and the value of y never changes. Thus plug in the values for y and t at the beginning
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This will give a value for C (assuming y is not 0 or 1), and then use y = 0.8 to get a value for t. Note that because of the logarithm, and the factor of 100 (= 1/0.01) that multiplies it once you solve for t, y will reach 0.8 within a relatively short range of t values (about 100*ln(4) ≈ 139 for y(0) = 0.5), unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, then you can plot the function as well.
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
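As a minimal sketch of that idea for the example ODE dy/dt = 0.01*y*(1-y), here is plain Euler stepping with linear interpolation for the crossing time (the starting value 0.5 is an assumption, matching the initial value used in the scipy example below):
def deriv(y):
    return 0.01 * y * (1 - y)

dt, t, y = 0.1, 0.0, 0.5             # assumed initial value y(0) = 0.5
while y < 0.8 and t < 3000:
    y_prev, t_prev = y, t
    y += deriv(y) * dt               # Euler step: gradient assumed constant over dt
    t += dt

# interpolate between the last two points for a more precise crossing time
t_hit = t_prev + (0.8 - y_prev) / (y - y_prev) * dt
print(t_hit)                         # close to 100*ln(4) ~ 138.6, the analytic answer for y(0) = 0.5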
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print "y(%f) = %f" % (t, y)
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint
ts = np.arange(0,3000,1) # time series - start, stop, step
def rhs(y,t):
    return 0.01*y*(1-y)
y0 = np.array([1]) # initial value
ys = odeint(rhs,y0,ts)
Then analyse the numpy array ys to find your answer (the dimensions of array ts match ys). (This may not work first time because I am constructing it from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
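For example, a minimal sketch of that last step (assuming a starting value of 0.5, as in the other answer, since y0 = 1 is a fixed point of this particular ODE):
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)

def rhs(y, t):
    return 0.01 * y * (1 - y)

ys = odeint(rhs, np.array([0.5]), ts).ravel()   # assumed initial value 0.5

# first index where y reaches the target, then linear interpolation for t
i = np.argmax(ys >= 0.8)
t_cross = np.interp(0.8, [ys[i - 1], ys[i]], [ts[i - 1], ts[i]])
print(t_cross)   # roughly 138.6, matching 100*ln(4) from the analytic solution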
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint documentation on the scipy website has examples for systems such as coupled springs, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files

3d integral, python, integration set constrained

I wanted to compute the volume of the intersection of a sphere and an infinite cylinder at some distance b, and I figured I would do it using a quick and dirty python script. My requirements are a <1s computation with >3 significant digits.
My thinking was as such:
We place the sphere, with radius R, such that its center is at the origin, and we place the cylinder, with radius R', such that its axis runs along z through (b,0,0). We integrate over the sphere, using a step function that returns 1 if we are inside the cylinder, and 0 if not, thus integrating 1 over the set constrained by being inside both sphere and cylinder, i.e. the intersection.
I tried this using scipy.integrate.tplquad. It did not work out. I think it's because of the discontinuity of the step function, as I get warnings such as the following. Of course, I might just be doing this wrong. Assuming I have not made some stupid mistake, I could attempt to formulate the ranges of the intersection, thus removing the need for the step function, but I figured I might try to get some feedback first. Can anyone spot a mistake, or point towards some simple solution?
Warning: The maximum number of subdivisions (50) has been achieved. If increasing the limit yields no improvement it is advised to analyze the integrand in order to determine the difficulties. If the position of a local difficulty can be determined (singularity, discontinuity) one will probably gain from splitting up the interval and calling the integrator on the subranges. Perhaps a special-purpose integrator should be used.
Code:
from scipy.integrate import tplquad
from math import sqrt
def integrand(z, y, x):
    if Rprim >= (x - b)**2 + y**2:
        return 1.
    else:
        return 0.

def integral():
    return tplquad(integrand, -R, R,
                   lambda x: -sqrt(R**2 - x**2),            # lower y
                   lambda x: sqrt(R**2 - x**2),             # upper y
                   lambda x, y: -sqrt(R**2 - x**2 - y**2),  # lower z
                   lambda x, y: sqrt(R**2 - x**2 - y**2),   # upper z
                   epsabs=1.e-01, epsrel=1.e-01)

R = 1
Rprim = 1
b = 0.5
print integral()
Assuming you are able to translate and scale your data in such a way that the center of the sphere is at [0, 0, 0] and its radius is 1, then a simple stochastic approximation may give you a reasonable answer fast enough. Something along these lines could be a good starting point:
import numpy as np

def in_sphere(p, r=1.):
    return np.sqrt((p**2).sum(0)) <= r

def in_cylinder(p, c, r=1.):
    m = np.mean(c, 1)[:, None]
    pm = p - m
    d = np.diff(c - m)
    d = d / np.sqrt((d**2).sum())      # unit vector along the cylinder axis
    pp = np.dot(np.dot(d, d.T), pm)    # projection of the points onto that axis
    return np.sqrt(((pp - pm)**2).sum(0)) <= r

def in_sac(p, c, r_c):
    return np.logical_and(in_sphere(p), in_cylinder(p, c, r_c))

if __name__ == '__main__':
    n, c = 1e6, [[0, 1], [0, 1], [0, 1]]
    p = 2 * np.random.rand(3, int(n)) - 1   # uniform samples in the cube [-1, 1]^3
    print((in_sac(p, c, 1).sum() / n) * 2**3)
Performing a triple adaptive numerical integration on a discontinuous function that is constant over two domains is a terribly poor idea, especially if you wish to see either speed or accuracy.
I would suggest a far better idea is to reduce the problem analytically.
Align the cylinder with an axis, by transformation. This translates the sphere to some point that is not at the origin.
Now, find the limits of intersection of the sphere with the cylinder along that axis.
Integrate over that axis variable. The area of intersection at any fixed value along the axis is simply the area of intersection of two circles, which in turn is simply computable using trigonometry and a little effort.
In the end, you will have an exact result, with almost no computation time needed.
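A sketch of that reduction for the geometry in the question (sphere of radius R at the origin, cylinder of radius Rprim with its axis parallel to z through (b, 0, 0)); here the final one-dimensional integral over the axis is done numerically with quad rather than by hand, and the cross-section at each height is the standard circle-circle intersection area:
import numpy as np
from scipy.integrate import quad

def circle_overlap(r1, r2, d):
    # area of intersection of two circles with radii r1, r2 and center distance d
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return np.pi * min(r1, r2)**2
    a1 = r1**2 * np.arccos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * np.arccos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    a3 = 0.5 * np.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

R, Rprim, b = 1.0, 1.0, 0.5
# sphere cross-section at height z is a circle of radius sqrt(R**2 - z**2) at the origin;
# the cylinder cross-section is a circle of radius Rprim centered a distance b away
volume, err = quad(lambda z: circle_overlap(np.sqrt(R**2 - z**2), Rprim, b), -R, R)
print(volume)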
I solved it using a simple MC integration, as suggested by eat, but my implementation was too slow. My requirements had increased. I therefore reformulated the problem mathematically, as suggested by woodchips.
Basically I formulated the limits of x as a function of z and y, and y as a function of z. Then I, in essence, integrated f(x,y,z)=1 over the intersection, using those limits. I did this because of the speed increase, which allows me to plot volume vs b, and because it allows me to integrate more complex functions with relatively minor modifications.
I include my code in case anyone is interested.
from scipy.integrate import quad
from math import sqrt
from math import pi

def x_max(y, r):
    return sqrt(r**2 - y**2)

def x_min(y, r):
    return max(-sqrt(r**2 - y**2), -sqrt(R**2 - y**2) + b)

def y_max(r):
    if (R < b and b - R < r) or (R > b and b - R > r):
        return sqrt(R**2 - (R**2 - r**2 + b**2)**2/(4.*b**2))
    elif r + R < b:
        return 0.
    else:  # r + b < R
        return r

def z_max():
    if R > b:
        return R
    else:
        return sqrt(2.*b*R - b**2)

def delta_x(y, r):
    return x_max(y, r) - x_min(y, r)

def int_xy(z):
    r = sqrt(R**2 - z**2)
    return quad(delta_x, 0., y_max(r), args=(r))

def int_xyz():
    return quad(lambda z: int_xy(z)[0], 0., z_max())

R = 1.
Rprim = 1.
b = 0.5
print 4*int_xyz()[0]
First off: You can calculate the volume of the intersection by hand. If you don't want to (or can't) do that, here's an alternative:
I'd generate a tetrahedral mesh for the domain and then add up the cell volumes. An example with pygalmesh and meshplex (both authored by myself):
import pygalmesh
import meshplex
import numpy
ball = pygalmesh.Ball([0, 0, 0], 1.0)
cyl = pygalmesh.Cylinder(-1, 1, 0.7, 0.1)
u = pygalmesh.Intersection([ball, cyl])
mesh = pygalmesh.generate_mesh(u, cell_size=0.05, edge_size=0.1)
points = mesh.points
cells = mesh.cells["tetra"]
# kick out unused vertices
uvertices, uidx = numpy.unique(cells, return_inverse=True)
cells = uidx.reshape(cells.shape)
points = points[uvertices]
mp = meshplex.MeshTetra(points, cells)
print(sum(mp.cell_volumes))
This produces a tetrahedral mesh of the intersection and prints 2.6567890958740463 as the volume. Decrease cell or edge sizes for higher precision.
