Integrate multivariable function with singularities using scipy.integrate.nquad - python

I'm trying to integrate the function
import numpy as np

def func(x, y):
    return 1 / np.sqrt(x**2 + y**2)
with both x and y between -1 and 1, for which the numerical value is roughly 7.05.
The problem is that the denominator vanishes at x = y = 0, and I cannot figure out how to handle that. I've been trying the points option of integrate.nquad:
integrate.nquad(func, [[-1,1], [-1,1]], points = [[0],[0]])
but that doesn't seem to be the correct syntax. I found many examples (which I can reproduce) of integrating one-variable functions with singularities using the points option, but I don't understand how (and whether it's possible) to generalize this to functions of more than one variable. Looking at the manual doesn't help me, unfortunately...

Use
nquad(func, [[-1, 1], [-1, 1]], opts=[{'points': [0,]}, {'points': [0,]}])
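Each entry of opts is a dict of quad-style options for the corresponding integration variable, and 'points' lists the locations of potential singularities along that axis. A complete, runnable sketch of the call (the print line is my addition):

import numpy as np
from scipy import integrate

def func(x, y):
    return 1 / np.sqrt(x**2 + y**2)

# one options dict per integration variable; 'points' flags the singularity at 0
result, abserr = integrate.nquad(func, [[-1, 1], [-1, 1]],
                                 opts=[{'points': [0]}, {'points': [0]}])
print(result)  # roughly 7.05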

Related

How to calculate the integral from the square of defined function in Python?

I have a problem with calculating the integral in Python.
So, I have defined the function:
def function_f(x):
    if 0 <= x <= 1:
        return (1/3)**x
    else:
        return 0
I know how to calculate the simple one (without the square) with this code:
from scipy import integrate
integrate.quad((function_f), 0, 1)
But the problem is that I need to calculate the integral of the square of this function, i.e. the integral of (function_f)^2.
I hope you have some advice on how I can do that.
I'm not used to scipy.integrate, but based on your explanation you should be able to do:
integrate.quad(lambda x:function_f(x)**2, 0, 1)
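For completeness, quad returns a (value, estimated-error) pair. A minimal runnable version (the expected value is my own check against the analytic result (1 - 1/9) / (2*ln 3) ≈ 0.405):

from scipy import integrate

def function_f(x):
    if 0 <= x <= 1:
        return (1/3)**x
    return 0

value, abserr = integrate.quad(lambda x: function_f(x)**2, 0, 1)
print(value)  # ≈ 0.405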

Sympy function derivatives and sets of equations

I'm working with nonlinear systems of equations. These systems are generally a nonlinear vector differential equation.
I now want to use functions, differentiate them with respect to time and to their time-derivatives, and find equilibrium points by solving the nonlinear equations 0 = rhs(eqs).
Similar things are needed to calculate the Euler-Lagrange equations, where you need the derivative of L wrt. diff(x,t).
Now my question is, how do I implement this in Sympy?
My two main problems are that differentiating a Symbol f wrt. t via diff(f, t) gives 0. I can see that
x = Symbol('x', real=True)
diff(x.subs(x, x(t)), t)  # because diff(x, t) => 0
and
diff(x**2, x)
do kind of work.
However, with
x = Function('x')(t)
diff(x, t)
I get this to work, but I cannot differentiate wrt. the function x itself, like
diff(x**2, x)  # does not work
Since I need these things all the time, not only for scalars but also for vectors (using jacobian), I really want a clean and functional workflow.
What type should I use to initialize my mathematical functions in SymPy in order to avoid strange substitutions?
It only gets worse for matrices, where I cannot get
eqns = Matrix([f1 - 5, f2 + 1])
variabs = Matrix([f1, f2])
nonlinsolve(eqns, variabs)
to work as expected, since it only allows symbols as input. Is there an easy conversion here, like eqns.tolist()? That doesn't work either.
EDIT:
I just found this question, which was answered in the direction of using expressions and matrices. I want to be able to solve sets of nonlinear equations, build the Jacobian of a vector wrt. another vector, and differentiate wrt. functions as stated above. Can anyone point me in a direction to start a concise workflow for this purpose? I guess the most complex task is calculating the Lie derivative wrt. a vector or list of functions; the rest should be straightforward.
Edit 2:
def substi(expr, variables):
    return expr.subs({w: w(t) for w in variables})
would automate the substitution, such that substi(vector_expr, varlist_vector).diff(t) is not all 0.
Yes, one has to give a function an argument before taking its derivative. But after that, differentiation with respect to x(t) works for me in SymPy 1.1.1, and I can also differentiate with respect to its derivative. Example of an Euler-Lagrange equation derivation:
from sympy import Symbol, Function, diff

t = Symbol("t")
x = Function("x")(t)
L = x**2 + diff(x, t)**2  # Lagrangian
EL = -diff(diff(L, diff(x, t)), t) + diff(L, x)
Now EL is 2*x(t) - 2*Derivative(x(t), t, t) as expected.
That said, there is a built-in function for Euler-Lagrange equations (euler_equations, in sympy.calculus.euler):
EL = euler_equations(L)
would yield the same result, except presented as a differential equation with right-hand side 0: [Eq(2*x(t) - 2*Derivative(x(t), t, t), 0)]
The following defines x to be a function of t
import sympy as s
t = s.Symbol('t')
x = s.Function('x')(t)
This should solve your problem of diff(x,t) being evaluated as 0. But I think you will still run into problems later on in your calculations.
I also work with calculus of variations and Euler-Lagrange equations. In these calculations, x' needs to be treated as independent of x, so it is generally better to use two entirely different variables for x and x' in order not to confuse SymPy with the relationship between the two. After we are done with the calculations in SymPy and go back to pen and paper, we can substitute x' back in for the second variable.
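A minimal sketch of that two-symbol approach (the names x and xdot are my own illustrative choice):

from sympy import symbols, diff

# treat x and x' as two independent symbols
x, xdot = symbols("x xdot", real=True)
L = x**2 + xdot**2         # the Lagrangian from above, in the two variables
dL_dx = diff(L, x)         # dL/dx = 2*x
dL_dxdot = diff(L, xdot)   # dL/dx' = 2*xdot, now well defined
# back on paper: substitute xdot -> x'(t) and apply d/dt to dL_dxdot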

Get all positive integral solutions for a linear equation

A game I played has a riddle that involves solving the following equation:
x*411 + y*295 + z*161 = 3200
Not wanting to think, I just slapped it into sympy, which I hadn't really used up to that point:
>>> from sympy import *
>>> x, y, z = symbols('x y z', integer=True, positive=True)
>>> solve(x*411 + y*295 + z*161 - 3200, [x, y, z])
[{x: -295*y/411 - 161*z/411 + 3200/411}]
Hmm, this only gave me a dependent solution, but I want all possible solutions in the domain I constrained the variables to, e.g. (assuming there are no other solutions) [{x: 4, y: 2, z:6}] or [(4, 2, 6)]
Of course I could now manually substitute two variables in a nested loop, or solve it by hand (as I did to get the solution above), but I want to know how to get sympy (or another library) to do it for me.
SymPy can solve Diophantine equations but doesn't have a built-in way to generate positive solutions. With Sage one can do that easily: here is a four-line script that generates all nonnegative integer solutions of your equation.
p = MixedIntegerLinearProgram()
w = p.new_variable(integer=True, nonnegative=True)
p.add_constraint(411*w[0] + 295*w[1] + 161*w[2] == 3200)
p.polyhedron().integral_points()
The output is ((4, 2, 6),)
Behind the scenes, integral_points will most likely just run a nested loop, although when that doesn't seem to work it tries to use the Smith normal form.
I know you wanted positive solutions, but (a) it's easy to exclude any zero-containing tuples from the answer; (b) it's also easy to replace x by x-1, etc., prior to solving; and (c) sticking to "nonnegative" makes it easy to create a polyhedron using the Mixed Integer Linear Programming module as above.
According to the documentation, one can also build a Polyhedron object directly from a system of inequalities ("Hrep"). This would allow one to explicitly say x >= 1, etc., but I haven't succeeded with this route.
With SymPy
The output of SymPy's Diophantine module is a parametric solution, like
(t_0, 2627*t_0 + 161*t_1 - 19200, -4816*t_0 - 295*t_1 + 35200)
in your example. This can be used in a loop to generate solutions in a pretty efficient way. The sticky point is finding bounds for parameters t_0 and t_1. Since this is just an illustration, I looked at the last expression above and plugged the limits 35200/4816 and 35200/295 directly in the loops below.
from sympy import *

x, y, z = symbols('x y z')
[s] = diophantine(x*411 + y*295 + z*161 - 3200)
print(s)
# free_symbols is a set, so sort by name to bind t_0 and t_1 deterministically
t_0, t_1 = sorted(s[2].free_symbols, key=lambda sym: sym.name)
for t0 in range(int(35200/4816) + 1):
    for t1 in range(int(35200/295) + 1):
        sol = [expr.subs({t_0: t0, t_1: t1}) for expr in s]
        if min(sol) > 0:
            print(sol)
The output is [4, 2, 6].

Curvefitting optimization error when fitting piecewise linear function

I have some data in two arrays which appears to have a break in it. I want my code to figure out where the break is, using np.piecewise with scipy's curve_fit. Here is what I have:
from scipy import optimize
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
x = np.array([7228, 7620, 7730, 7901, 8139, 8370, 8448, 8737, 8824, 9089, 9233, 9321, 9509, 9568, 9642, 9756, 9915, 10601, 10942], dtype=float)
y = np.array([.874, .893, .8905, .8916, .9095, .9142, .9109, .9185, .9169, .9251, .9290, .9304, .9467, .9378, .9464, .9508, .9583, .9857, .9975], dtype=float)

def piecewise_linear(x, x0, y0, k1, k2):
    return np.piecewise(x, [x < x0], [lambda x: k1*x + y0 - k1*x0,
                                      lambda x: k2*x + y0 - k2*x0])

p, e = optimize.curve_fit(piecewise_linear, x, y)
perr = np.sqrt(np.diag(e))
xd = np.linspace(7228, 11000, 3000)
plt.plot(x, y, "o")
plt.plot(xd, piecewise_linear(xd, *p))
My issue is that if I run this, I get a warning: "OptimizeWarning: Covariance of the parameters could not be estimated". Not sure how to get around this? Is there maybe a way to feed initial parameters into this function to help it converge, or similar?
Note, I do realize that the other way I could get this to work is by interpolating and finding the second derivative of my data. I've already done this, but because my data is not evenly spaced and the y-axis data has some error in it, I am interested in getting this approach to work as well for statistical purposes. So, to be clear, what I want here are the parameters for the two lines (slope/intercept) and the breakpoint. (Ideally I would love to get errors on these too, but I'm not sure if that's possible with this method.) Thanks in advance!
The code works perfectly fine, only the initial values are causing problems.
By default curve_fit starts with all parameters set to 1. Thus, x0 starts way out of range of the x in your data and the optimizer cannot compute a sensible gradient.
This small modification will fix the issue:
# make sure initial x0 and y0 are in range of the data
p0 = [np.mean(x), np.mean(y), 1, 1]
p, e = optimize.curve_fit(piecewise_linear, x, y, p0)  # pass initial parameter estimates
perr = np.sqrt(np.diag(e))
xd = np.linspace(7228, 11000, 3000)
plt.plot(x, y, "o")
plt.plot(xd, piecewise_linear(xd, *p))
print(p)           # [9.32099947e+03 9.32965835e-01 2.58225121e-05 4.05400820e-05]
print(np.diag(e))  # [4.56978067e+04 5.52060368e-05 3.88418404e-12 7.05010755e-12]
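As for the error estimates the question asked about: perr = np.sqrt(np.diag(e)) above gives one-standard-deviation uncertainties for the fitted parameters [x0, y0, k1, k2], so once the fit converges the breakpoint x0 comes with an error bar for free.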
Probably your software uses an iterative method starting from an initial guess. Generally, the initial guess is the weakness of such methods.
If you want to overcome this kind of difficulty, use a non-iterative method, which doesn't require an initial guess. If the fitting criterion of the non-iterative method is not convenient for you, nevertheless use the non-iterative method first to obtain a first solution, then refine it with a classical iterative method starting from that solution.
For example, I obtained a good fit with the very simple algorithm (non-iterative, no initial guess) given on pp. 12-13 of the paper: https://fr.scribd.com/document/380941024/Regression-par-morceaux-Piecewise-Regression-pdf

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write a product of two simple quotients as a sum of the two quotients times some constants (a partial-fraction decomposition):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs over a common denominator and comparing the constant and first-order y terms on both sides. In this case it is simple: A = B = 1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y, giving -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (Note that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so those starting values are excluded.) Plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1); then use y = 0.8 to get a value for t. Note that because the dependence on the y values is only logarithmic, y will reach 0.8 within a moderate range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t; then you can plot the function as well.
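As a quick numerical sanity check of this formula (the starting value y(0) = 0.5 is my own illustrative choice; it makes C = 0):

import math

y0, y_target = 0.5, 0.8
C = math.log(y0 / (1 - y0))                           # from the initial condition
t = (math.log(y_target / (1 - y_target)) - C) / 0.01  # solve 0.01*t + C = ln(y/(1-y))
print(t)  # ≈ 138.63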
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at time zero, y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
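As an illustration, here is a minimal sketch of an Euler integrator for this ODE (the step size h = 0.01 and the starting value y(0) = 0.5 are my own illustrative choices):

def euler_solve(f, y0, t_end, h=0.01):
    # integrate dy/dt = f(t, y) from t = 0 with fixed Euler steps of size h
    t, y = 0.0, y0
    while t < t_end:
        y += f(t, y) * h  # assume the gradient is constant over [t, t + h]
        t += h
    return y

# dy/dt = 0.01*y*(1-y); the analytic solution gives y(200) ≈ 0.881
print(euler_solve(lambda t, y: 0.01 * y * (1 - y), 0.5, 200.0))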
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)  # y(0) = 0.5; t starts at 0 by default
t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)  # time series: start, stop, step

def rhs(y, t):
    return 0.01 * y * (1 - y)

y0 = np.array([0.1])  # initial value; note y0 = 1 would be a fixed point that never moves
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the first dimension of ys matches ts). (This may not work first time because I am constructing it from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
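For instance, a sketch of that last step, assuming the ys produced above (np.interp needs increasing y-values, which holds here because the logistic solution grows monotonically from y0 = 0.1):

# invert y(t) numerically: the t at which y first reaches 0.8
t_hit = np.interp(0.8, ys[:, 0], ts)
print(t_hit)  # ≈ 358 for y0 = 0.1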
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint page on the scipy website has examples of systems such as coupled springs, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. These exist, and the low-level code for such integrators is supplied with scipy, but it has not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation, which uses backtracking (hence it is not optimal, as it is a bolt-on addition to an integrator that does not have root finding of its own): https://github.com/scipy/scipy/pull/4904/files
