The following problem has been researched, primarily with matplotlib in Python.
"Basic" functions such as y = x^2 are possible to plot, but what about an equation that isn't necessarily a function, because one x can map to several y values? For example:
x^2 + y^2 = 1 (just a basic circle with a radius of 1 around the point (0, 0) in a two-dimensional coordinate system).
Is there any way to plot such an equation with matplotlib or a similar library?
The idea of rewriting the equation as a drawable function has come to my mind, but due to the absolute value it just looks harder than the original equation; e.g., the equation above turns into the "function" |y| = sqrt(1 - x²), which then has to be plotted with both -y and +y.
//EDIT: An edit of this question, on request from @mkrieger1.
The aim of my software is to take an input (a string given by another function, representing an arbitrary equation, e.g. "y^3-sqrt(sin(x^2)-2)*2 = 3x") and turn it into a plot. I personally failed with the approach of solving the equations for y (as mentioned previously), especially with more complex inputs. Splitting these equations into "smaller pieces" is, given the broad variety of mathematical inputs, pretty hard as well, so I thought that going with a contour-solving approach would be best (as @mkrieger1 suggested).
Once again, this approach is tricky because the equation needs some editing before it can be passed to plt.contour(X, Y, func, [0]), and because of a UserWarning later on.
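For reference, the contour approach I am referring to looks roughly like this for the circle example above (a minimal sketch):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1.5, 1.5, 400)
y = np.linspace(-1.5, 1.5, 400)
X, Y = np.meshgrid(x, y)
# Move everything to one side, F(x, y) = 0, and draw the zero-level contour.
plt.contour(X, Y, X**2 + Y**2 - 1, [0])
plt.gca().set_aspect('equal')
plt.show()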
You can also use SymPy to convert an expression in a string to an equation and then plot it. I left out the -2 of the example, as it would lead to a quite empty plot. SymPy's parser supports transformations that allow the multiplication sign to be omitted (as in 3x) and that convert Python's xor operator (^) to a power.
from sympy import plot_implicit, Eq
from sympy.parsing.sympy_parser import parse_expr
from sympy.parsing.sympy_parser import standard_transformations, convert_xor, implicit_multiplication
string = "y^3-sqrt(sin(x^2))*2 = 3x"
transformations = standard_transformations + (implicit_multiplication, convert_xor)
lhs = parse_expr(string.split('=')[0], transformations=transformations)
rhs = parse_expr(string.split('=')[1], transformations=transformations)
plot_implicit(Eq(lhs, rhs))
Another example:
from sympy import plot_implicit, Eq, cos
from sympy.abc import x, y
plot_implicit(Eq(x/y, cos(y)), (x, -10, 10), (y, -10, 10))
Note that without explicitly setting ranges for the variables, plot_implicit assumes default ranges of -5 to 5 for both.
If you use matplotlib at all, you will notice that plot accepts a pair of equal-length arrays representing sequences of x-y pairs. It has no knowledge of functions, equations, or any of the other concepts you mention.
The assertion that plotting a simple function is supported is therefore largely meaningless, even if true. That being said, a standard approach to converting something that is a non-function in Cartesian space, like a circle, is to parametrize it. One possible parametrization for many popular non-functions is polar coordinates.
For example:
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0, 2 * np.pi, 100)  # the parameter
x = np.cos(t)
y = np.sin(t)
plt.plot(x, y)  # the x-y pairs trace out the circle
Given N one-dimensional data points X, I would like to evaluate each point at K cubic B-splines. In R there is a simple function with an intuitive API, called bs. There is actually a Python package, patsy, which replicates this, but I can't use that package; only scipy and the like are available.
Having looked through the scipy.interpolate documentation on spline-related functions, the closest I can find is BSpline, or BSpline.basis_element, but how to get just the K basis functions is a mystery to me. I tried the following:
import numpy as np
import scipy.interpolate as intrp
import matplotlib.pyplot as plt
import patsy # for comparison
# in Patsy/R: nice and sensible
x = np.linspace(0., 1., 100)
y = patsy.bs(x, knots=np.linspace(0,1,4), degree=3)
plt.subplot(1,2,1)
plt.plot(x,y)
plt.title('B-spline basis')
# in scipy: ?????
y_py = np.zeros((x.shape[0], 6))
for i in range(6):
    y_py[:, i] = intrp.BSpline(np.linspace(0, 1, 10), (np.arange(6) == i).astype(float), 3, extrapolate=False)(x)
plt.subplot(1,2,2)
plt.plot(x,y_py)
plt.title('Something else')
It doesn't work, and it made me realise I don't actually know what this function is doing. First of all, it will not accept fewer than 8 knots, and I don't understand why. Secondly, it thinks the splines are only defined within roughly the (1/3, 2/3) range, which maybe means that it ignores the first 3 and last 3 knot values for some reason? Do I need to pad the knots?
Any help would be appreciated!
EDIT: I have resolved this discrepancy; indeed it seems that BSpline ignores the first 3 and last 3 values of knots. I'm still interested in knowing why there is this discrepancy, so that I feel less bad about the odd hour spent debugging a strange interface.
For posterity, here is the code that does produce the basis functions
import numpy as np
import scipy.interpolate as intrp
import matplotlib.pyplot as plt
import patsy # for comparison
these_knots = np.linspace(0,1,5)
# in Patsy/R: nice and sensible
x = np.linspace(0., 1., 100)
y = patsy.bs(x, knots=these_knots, degree=3)
plt.subplot(1,2,1)
plt.plot(x,y)
plt.title('B-spline basis')
# in scipy: ?????
numpyknots = np.concatenate(([0,0,0],these_knots,[1,1,1])) # because??
y_py = np.zeros((x.shape[0], len(these_knots)+2))
for i in range(len(these_knots)+2):
    y_py[:, i] = intrp.BSpline(numpyknots, (np.arange(len(these_knots)+2) == i).astype(float), 3, extrapolate=False)(x)
plt.subplot(1,2,2)
plt.plot(x,y_py)
plt.title('In SciPy')
Looks like you already found the answer, but to clarify why you need to define multiple knots at the edges, you can read the scipy docs. The basis functions are defined by the Cox-de Boor recursion formula. It starts by defining neighbouring support domains between the given knot points with a constant value of 1 (zeroth order). These are then convolved to obtain the higher-order basis functions: two domains make one first-order basis function, three domains make one second-order basis function, and four domains (= 5 knot points) make one third-order basis function that is supported on the range of those 5 knot points. If you want n basis functions of degree k = 3, you will need (n + k + 1) knot points.
The minimum of 8 knots comes from the requirement n >= k + 1, which gives 2*(k+1) knot points. The base interval t[k] ... t[n] is the only range in scipy where basis functions of full degree can be defined. To make sure that this base interval reaches the outer knot points, the two end knots are usually given a multiplicity of (k+1). scipy probably showed only this base interval in your 'Something else' result.
Note that you can also get the basis functions using
y_py[:,i] = intrp.BSpline.basis_element(numpyknots[i:i+5], extrapolate=False)(x)
this also removes the difference at x = 1.
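As a quick numerical check of the (n + k + 1) bookkeeping, with the numbers from the example above:

import numpy as np

k = 3
these_knots = np.linspace(0, 1, 5)
t = np.concatenate(([0]*k, these_knots, [1]*k))  # 11 knot points in total
n = len(t) - k - 1  # 7 basis functions = len(these_knots) + 2, matching the code above
print(n)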
I'm working with nonlinear systems of equations. These systems are generally a nonlinear vector differential equation.
I now want to use functions, differentiate them with respect to time and to their time derivatives, and find equilibrium points by solving the nonlinear equations 0 = rhs(eqs).
Similar things are needed to compute the Euler-Lagrange equations, where you need the derivative of L with respect to diff(x, t).
Now my question is, how do I implement this in Sympy?
My main two problems are as follows. First, when differentiating a plain Symbol f with respect to t, diff(f, t) gives 0. I can see that with

x = Symbol('x', real=True)
diff(x.subs(x, x(t)), t)  # works, whereas diff(x, t) => 0

and

diff(x**2, x)

things kind of work. However, with

x = Function('x')(t)
diff(x, t)

differentiation with respect to t works, but I cannot differentiate with respect to the function x itself, like

diff(x**2, x)  # does not work
Since I need these operations all the time, and not only for scalars but for vectors (using the Jacobian), I really want this to be a clean and functional workflow.
As what kind of type should I create my mathematical functions in SymPy in order to avoid awkward substitutions?
It only gets worse for matrices, where I cannot get

eqns = Matrix([f1 - 5, f2 + 1])
variabs = Matrix([f1, f2])
nonlinsolve(eqns, variabs)

to work as expected, since it only accepts symbols as input. Is there an easy conversion here, like eqns.tolist()? That doesn't work either.
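For reference, flattening the Matrix objects into plain lists does seem to run when the unknowns are plain Symbols (a minimal sketch; whether this carries over to functions of t is exactly what I am unsure about):

import sympy as sp

f1, f2 = sp.symbols('f1 f2')
eqns = sp.Matrix([f1 - 5, f2 + 1])
variabs = sp.Matrix([f1, f2])
sol = sp.nonlinsolve(list(eqns), list(variabs))  # -> {(5, -1)}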
EDIT:
I just found this question, which was answered in favour of using expressions and matrices. I want to be able to solve sets of nonlinear equations, build the Jacobian of a vector with respect to another vector, and differentiate with respect to functions as stated above. Can anyone point me towards a concise workflow for this purpose? I guess the most complex task is calculating the Lie derivative with respect to a vector or list of functions; the rest should be straightforward.
Edit 2:
def substi(expr, variables):
    return expr.subs({w: w(t) for w in variables})

would automate the substitution, so that substi(vector_expr, varlist_vector).diff(t) is not identically 0.
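To show what I mean, here is how I would like to use this, as a minimal sketch with made-up example functions (the names F, Ft, and vars_t are mine); this is the workflow I want confirmed or improved:

from sympy import Symbol, symbols, Matrix, sin

t = Symbol('t')
x, y = symbols('x y')
F = Matrix([x**2 + y, sin(y)])  # a made-up vector expression
Ft = substi(F, [x, y])          # entries now depend on x(t), y(t)
vars_t = substi(Matrix([x, y]), [x, y])
J = Ft.jacobian(vars_t)         # Jacobian with respect to the functions x(t), y(t)
print(Ft.diff(t))               # no longer identically 0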
Yes, one has to give a function an argument before taking its derivative. But after that, differentiation with respect to x(t) works for me in SymPy 1.1.1, and I can also differentiate with respect to its derivative. Example of an Euler-Lagrange equation derivation:
t = Symbol("t")
x = Function("x")(t)
L = x**2 + diff(x, t)**2 # Lagrangian
EL = -diff(diff(L, diff(x, t)), t) + diff(L, x)
Now EL is 2*x(t) - 2*Derivative(x(t), t, t) as expected.
That said, there is a built-in method for Euler-Lagrange equations:
EL = euler_equations(L)
would yield the same result, except presented as a differential equation with right-hand side 0: [Eq(2*x(t) - 2*Derivative(x(t), t, t), 0)]
The following defines x to be a function of t:
import sympy as s
t = s.Symbol('t')
x = s.Function('x')(t)
This should solve your problem of diff(x,t) being evaluated as 0. But I think you will still run into problems later on in your calculations.
I also work with calculus of variations and Euler-Lagrange equations. In these calculations, x' needs to be treated as independent of x, so it is generally better to use two entirely different variables for x and x' to avoid confusing SymPy about the relationship between them. After we are done with the calculations in SymPy and go back to pen and paper, we can substitute the second variable back for x'.
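A minimal sketch of this two-variable trick (the proxy symbol xdot is mine):

import sympy as sp

t = sp.Symbol('t')
x, xdot = sp.symbols('x xdot')  # treat x and x' as independent symbols
L = x**2 + xdot**2              # the Lagrangian written in the proxy variables
dLdx = sp.diff(L, x)            # the partial derivatives are now unambiguous
dLdxdot = sp.diff(L, xdot)
# Reinstate the time dependence only at the end, back "on paper":
xt = sp.Function('x')(t)
EL = -sp.diff(dLdxdot.subs(xdot, sp.diff(xt, t)), t) + dLdx.subs(x, xt)
print(EL)  # 2*x(t) - 2*Derivative(x(t), (t, 2)), matching the answer above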
Suppose I have the following problem:
I have a complex function A(x) and a complex function B(y). I know these curves cross in the complex plane. I would like to find the corresponding x and y of the intersection point, numerically (and/or graphically). What is the cleverest way of doing that?
This is my starting point:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi
x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)
def A_(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )
A = np.vectorize(A_)

def B_(y):
    return 3/(1j*y*(1+1j*y))
B = np.vectorize(B_)
real_A = np.real(A(x))
imag_A = np.imag(A(x))
real_B = np.real(B(y))
imag_B = np.imag(B(y))
plt.plot(real_A, imag_A, color='blue')
plt.plot(real_B, imag_B, color='red')
plt.show()
I don't have to plot it necessarily. I just need x_intersection and y_intersection (with some error that depends on x and y).
Thanks a lot in advance!
EDIT:
I should have used different variable names. To clarify what I need:
x and y are numpy arrays, and I need the index of the intersection point in each array, plus the corresponding x and y values (which again are not the intersection point itself, but values from the arrays x and y).
Here I find the minimum of the distance between the two curves. Also, I cleaned up your code a bit (e.g., vectorize wasn't doing anything useful).
import matplotlib.pyplot as plt
import numpy as np
from numpy import sqrt, pi
from scipy import optimize
def A(x):
    return -1/( 8/(pi*x)*sqrt(1-(1/x)**2) - 1j*(8/(pi*x**2)) )

def B(y):
    return 3/(1j*y*(1+1j*y))

# The next few lines find the intersection
def dist(x):
    return abs(A(x[0]) - B(x[1]))
sln = optimize.minimize(dist, [1, 1])
# plotting everything....
a0, b0 = A(sln.x[0]), B(sln.x[1])
x = np.linspace(1, 10, 10000)
y = np.linspace(1, 60, 10000)
a, b = A(x), B(y)
plt.plot(a.real, a.imag, color='blue')
plt.plot(b.real, b.imag, color='red')
plt.plot(a0.real, a0.imag, "ob")
plt.plot(b0.real, b0.imag, "xr")
plt.show()
The specific x and y values at the intersection point are sln.x[0] and sln.x[1], since A(sln.x[0])=B(sln.x[1]). If you need the index, as you also mention in your edit, I'd use, for example, numpy.searchsorted(x, sln.x[0]), to find where the values from the fit would insert into your x and y arrays.
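For example (the index names ix and iy are mine):

ix = np.searchsorted(x, sln.x[0])  # insertion index of the fitted x-value in the x array
iy = np.searchsorted(y, sln.x[1])  # likewise for the y array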
I think what's a bit tricky about this problem is that the space where the intersection is graphed (i.e., the complex plane) does not show the input space, and one has to optimize over the input space. To visualize the solution, it's therefore useful to plot the distance between the curves over the input space. That can be done like this:
X, Y = np.meshgrid(x, y)  # grid over the input space (x and y from above)
data = dist((X, Y))

fig, ax = plt.subplots()
im = ax.imshow(data, cmap=plt.cm.afmhot, interpolation='none',
               extent=[min(x), max(x), min(y), max(y)], origin="lower")
cbar = fig.colorbar(im)
plt.plot(sln.x[0], sln.x[1], "xw")
plt.title("abs(A(x)-B(y))")
From this it is much clearer how optimize.minimize works: it just rolls down the slope to find the minimum distance, which is zero in this case. But still, there's no obvious single visualization one can use to see the whole problem.
For other intersections one has to dig a bit more. That is, @emma asked about other roots in the comments, and there I mentioned that there's no generally reliable way to find all roots of arbitrary equations, but here's how I'd go about looking for other roots. I won't lay out the complete program here, just the changes and plots as I go along.
First, it's obvious for the domain shown in my first plot that there's only one intersection, and that there are no intersections in the region to the left. The only place there could be another intersection is to the right, but for that I'll need to allow the sqrt in the def of A to take a negative argument without throwing an exception. An easy way to do this is to add 0j to the argument of the sqrt, like this: sqrt(1 + 0j - (1/x)**2). Then the plot with the intersection becomes
I plotted this over a broader range (x=np.linspace(-10, 10, 10000) and y=np.linspace(-400, 400, 10000)) and the above is the zoom of the only place where anything interesting is going on. This shows the intersection found above, plus the point where it looks like the two curves might touch (where the red curve, B, comes to a point nearly meeting the blue curve A going upward), so that's the new interesting thing, and the thing I'll look for.
A bit of playing around with limits, etc, show that B is coming to a point asymptotically, and the equation of B is obvious that it will go to 0 + 0j for large +/- y, so that's about all there is to say for B.
It's difficult to understand A from the above plot, so I'll look at the real and imaginary parts independently:
So it's not a crazy-looking function, and the jumping between Re=const and Im=const is just the nature of sqrt(1 - x**-2), which is purely imaginary for abs(x) < 1 and purely real for abs(x) > 1.
It's pretty clear now that the other place the curves are equal is at y = +/-inf and x = 0. A quick look at the equations shows that A(0) = 0 + 0j and B(+/-inf) = 0 + 0j, so this is another intersection point (though since it occurs at B(+/-inf), it's somewhat ambiguous whether it should be called an intersection).
So that's about it. One other point to mention: if these didn't have such an easy analytic solution, say it weren't clear what B was at infinity, one could also graph and minimize by looking at B(1/y), and then go from there using the same tools as above to deal with the infinity. So using:
def dist2(x):
    return abs(A(x[0]) - B(1./x[1]))
Here the minimum on the right is the one initially found, and the zero, now at x = -0 and 1/y = 0, is the other one (which, again, isn't interesting enough to apply an optimizer to here, but it could be in other equations).
Of course, it's also possible to estimate this by just finding the minimum of the data that goes into the above graph, like this:
X, Y = np.meshgrid(x, y)
data = dist((X, Y))
r = np.unravel_index(data.argmin(), data.shape)
print(x[r[1]], y[r[0]])
# 2.06306306306 1.8008008008  # the minimize approach gave 2.05973231 1.80069353
But this is only approximate (to the resolution of data) and involves many more calculations (about a million compared to a few hundred). I only post this because I think it might be what the OP originally had in mind.
Briefly, two analytic solutions are derived for the roots of the problem. The first removes the parametric representation of x and solves for the roots directly in the (u, v) plane, where for example A(x): u(x) + i v(x) gives v(u) = f(u). The second uses a polar representation, e.g. A(x) is given by r(x) exp(i theta(x)), and offers a better understanding of the behavior of the square root as x passes through unity towards zero. Possible solutions occurring at the singular points are explored. Finally, a bisection root-finding algorithm is constructed as a Python iterator to invert certain solutions. Summarizing, the one real root can be found as a solution to either of the cubic equations derived below, and gives:
x0 = -2.059732
y0 = +1.800694
A(x0) = B(y0) = -0.707131 - 0.392670i
As in most problems, there are a number of ways to proceed. One can use a "black box" and hopefully find the root one is looking for. Sometimes an answer is all that is desired, and with a little understanding of the functions this may be an adequate way forward. Unfortunately, such an approach often provides less insight about the problem than others.
For example, algorithms have difficulty locating roots in the global space: a local root may be found while other roots lying close by remain undiscovered. Consequently the question arises, "Are all the roots accounted for?" A more complete understanding of the functions, e.g. asymptotic behaviors, branch cuts, singular points, provides the global perspective needed to answer this, as well as other important questions.
So another possible solution is building one's own "black box." A simple bisection routine might be a starting point: robust if the root lies in the initial interval, and fairly efficient. This encourages us to look at the global behavior of the functions. As the code is structured and debugged, the various functions are explored, new insights are gained, and the algorithm becomes a tool towards a more complete solution of the problem. Perhaps, with some patience, a closed-form solution can even be found. A Python iterator implementing a bisection root-finding algorithm is constructed and listed below.
Begin by putting the functions A(x) and B(x) in a more standard form:
C(x) = u(x) + i v(x)
and here the complex number i has been brought out of the denominator and into the numerator, casting the problem into the form of functions of a complex variable. The new representation simplifies the original functions considerably; the real and imaginary parts are now cleanly separated. An interesting graph is to plot A(x) and B(x) in the 3-dimensional space (u, v, x) and then visualize the projection onto the u-v plane.
import numpy as np
import matplotlib.pyplot as plt

# f and g stand for the curves A and B above; a and b bracket the parameter range.
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
s = np.linspace(a, b, 1000)
z = s  # the height of each curve is the parameter itself
ax.plot(f(s).real, f(s).imag, z, color='blue')
ax.plot(g(s).real, g(s).imag, z, color='red')
# projections of both curves onto the u-v plane
ax.plot(f(s).real, f(s).imag, 0, color='black')
ax.plot(g(s).real, g(s).imag, 0, color='black')
The question arises: "Can the parametric representation be replaced so that a relationship such as,
A(x): u(x) + i v(x) gives v(u) = f(u)
is obtained?" This will provide A(x) as a function v(u) = f(u) in the u-v plane. Then, if for
B(x): u(x) + i v(x) gives v(u) = g(u)
a similar relationship can be found, the solutions can be set equal to one another,
f(u) = g(u)
and the root(s) computed. In fact, it is convenient to look for a solution in the square of the above equation. The worst case is that an algorithm will have to be built to find the root, but at this point the behavior of our functions is better understood. For example, if f(u) and g(u) are polynomials of degree n, then it is known that there are n roots. The best case is that a closed-form solution might be the reward for our determination.
Here is more detail to the solution. For A(x) the following is derived:
and v(u) = f(u) is just a constant, v(u) = -pi/8 (the black curve below). Similarly for B(x) a slightly more complex form is required:
Look at the function g(u) for B(x). It is imaginary if u > 0, but the root must be real since f(u) is real. This means that u must be less than 0, and the square root has both a positive and a negative real branch. The sign of f(u) then allows one to pick the negative branch as the solution for the root. So the fact that the solution must be real determines the sign of u, and the fact that the real root is negative specifies which branch of the square root to choose.
In the following plot both the real (u < 0) and complex (u > 0) solutions are shown.
The camera looks toward the origin in the back corner, where the red and blue curves meet. The z-axis is the magnitude of f(u) and g(u). The x and y axes are the real/complex values of u respectively. The blue curves are the real solution with (3 - |u|). The red curves are the complex solution with (3 + |u|). The two sets meet at u = 0. The black curve is f(u) equal to (-pi/8).
There is a divergence in g(u) at |u| = 3 and this is associated with x = 0. It is far removed from the solution and will not be considered further.
To obtain the roots of f = g it is easier to square f(u) and equate the two functions. When g(u) is squared, the branches of the square root are lost, much as the sign information of the solutions x = ±2 is lost in x**2 = 4. In the end the appropriate root is chosen by the sign of f(u), so this is not an issue.
So by looking at the dependence of A and B, with respect to the parametric variable x, a representation for these functions was obtained where v is a function of u and the roots found. A simpler representation can be obtained if the term involving c in the square root is ignored.
The answer gives all the roots to be found. A cubic equation has at most three roots and one is guaranteed to be real. The other two may be imaginary or real. In this case the real root has been found and the other two roots are complex. Interestingly, as c changes these two complex roots may move into the real plane.
In the above figure the x-axis is u and the y axis is the evaluated cubic equation with constant c. The blue curve has c as (pi/8) squared. The red curve uses a larger and negative value for c, and has been translated upwards for purposes of demonstration. For the blue curve there is an inflection point near (0, 0.5), while the red curve has a maximum at (-0.9, 2.5) and a minimum at (0.9, -0.3).
The intersection of the cubic with the black line represents the roots given by: u**3 + c u + 3c = 0. For the blue curve there is one intersection and one real root with two roots in the complex plane. For the red curve there are three intersections, and hence 3 roots. As we change the value of the constant c (blue to red) the one real root undergoes a "pitchfork" bifurcation, and the two roots in the complex plane move into the real space.
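The real root is easy to check numerically; a quick sketch using c = (pi/8)**2, the value quoted for the blue curve:

import numpy as np

c = (np.pi / 8)**2
print(np.roots([1.0, 0.0, c, 3.0*c]))  # roots of u**3 + c*u + 3*c: one real root near -0.707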
Although the root (u_t, v_t) has been located, obtaining the value for x requires that (u, v) be inverted. In the present example this is a trivial matter, but if not, a bisection routine can be used to avoid the algebraic difficulties.
The polar representation also leads to a solution for the real root, and it rounds out the analysis with an independent verification of the first result. Second, it answers the question of what happens at the singularity in the square root. Third, it gives a greater understanding of the multiplicity of roots.
The steps are: (1) convert A(x) and B(x) into polar form, (2) equate the modulus and argument giving two equations in two unknowns, (3) make a substitution for z = x**2. Converting A(x) to polar form:
Absolute value bars are not indicated, and it should be understood that the moduli r(x) and s(x) are positive definite as their names imply. For B(x):
The two equations in two unknowns:
Finally, the cubic solution is sketched out here where the substitution z = x**2 has been made:
The solution for z = x**2 gives x directly, which allows one to substitute into both A(x) and B(x). This is an exact solution if all terms are maintained in the cubic solution, and there is no error in x0, y0, A(x0), or B(x0). A simpler representation can be found by considering terms proportional to 1/d as small.
Before leaving the polar representation consider the two singular points where: (1) abs(x) = 1, and (2) x = 0. A complicating factor is that the arguments behave something like 1/x instead of x. It is worthwhile to look at a plot of the arctan(a) and then ask yourself how that changes if its argument is replaced by 1/a. The following graphs will then look less foreign.
Consider the polar representation of B(x). As x approaches 0 the modulus and argument tend toward infinity, i.e. the point is infinitely far from the origin and lies along the y-axis. Approaching 0 from the negative direction the point lies along the negative y-axis with varphi = (-pi/2), while approaching from the other direction the point lies along the positive y-axis with varphi = (+pi/2).
A somewhat more complicated behavior is exhibited by A(x). A(x) is even in x since the modulus is positive definite and the argument involves only x**2. There is a symmetry across the y-axis that allows us to only consider the x > 0 plane.
At x = 1 the modulus is just (pi/8), and as x continues to approach 0 so does r(x). The behavior of the argument is more complex. As x approaches unity from large positive values, the argument diverges towards +inf and theta approaches (+pi/2). As x passes through 1, the argument becomes complex. At x = 0 the argument reaches its minimum value of -i. For complex arguments the arctan is given by arctan(z) = (1/(2i)) ln((1 + iz)/(1 - iz)).
The following are plots of the arguments for A(x) and B(x). The x-axis is the value of x, and the y-axis is the value of the angle in units of pi. In the first plot theta is shown in blue curves, and as x approaches 1 the angle approaches (+pi/2). Theta is real because abs(x) >= 1, and notice it is symmetric across the y-axis. The black curve is varphi and as x approaches 0 it approaches plus or minus (pi/2). Notice it is an odd function in x.
In the second plot, A(x) is shown where abs(x) < 1 and the argument becomes complex. Near x = 1, theta is equal to (+pi/2), the blue curve, minus a small imaginary part, the red curve. As x approaches zero, theta is equal to (+pi/2) minus a large imaginary part. At x = 0 the argument is equal to -i, and theta is (+pi/2) minus an infinite imaginary part, i.e., ln(0) = -inf.
The values for x0 and y0 are determined by the set of equations equating the modulus and argument of A(x) and B(x), and there are no other roots. If x0 = 0 were a root, it would fall out of these equations; the same holds for x0 = 1. In fact, if one uses approximations of the argument of A(x) about these points and substitutes into the equation for the modulus, the equality cannot be maintained there.
Here is another perspective: consider the set of equations with x assumed large, and call the solution x_inf. The equation for the argument then gives x_inf = y_inf, where 1 is neglected with respect to x_inf squared. Upon substitution into the second equation, a cubic in x_inf is obtained. Will this give the correct answer? Yes, if x0 is actually large, and in this case you might get away with it, since x0 is approximately 2 and the difference between sqrt(4) and sqrt(5) is around 10%. But does this mean that x_inf = 100 is a solution? No, it does not: x_inf is only a solution if it equals x0.
The initial reason for examining this problem was to find a context for building a root-finding bisection routine as a Python iterator. The routine can be used to find any of the roots discussed here, and looks something like this:
class Bisection:
    def __init__(self, a, b, func, max_iter):
        self.max_iter = max_iter
        self.count_iter = 0
        self.a = a
        self.b = b
        self.func = func
        fa = func(self.a)
        fb = func(self.b)
        if fa*fb >= 0.0:
            # the root must be bracketed: func must change sign on [a, b]
            raise ValueError("func(a) and func(b) must have opposite signs")

    def __iter__(self):
        self.x1 = self.a
        self.x2 = self.b
        self.xmid = self.x1 + ((self.x2 - self.x1)/2.0)
        return self

    def __next__(self):
        f1 = self.func(self.x1)
        fmid = self.func(self.xmid)
        if fmid == 0.0:
            return self.xmid
        # keep the half of the bracket in which the sign change occurs
        if f1*fmid < 0:
            self.x2 = self.xmid
        else:
            self.x1 = self.xmid
        self.xmid = self.x1 + ((self.x2 - self.x1)/2.0)
        self.count_iter += 1
        if self.count_iter >= self.max_iter:
            raise StopIteration
        return self.xmid
The routine does only a minimal amount of exception handling and was used to find x for the given solution in the u-v plane. The arguments a and b give the lower and upper brackets for the root to be found. The argument func is the function whose root is sought; it might look like u0 - B(x).real. The argument max_iter tells the iterator to stop after the given number of bisections has been attempted.
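For completeness, a minimal usage sketch with a hypothetical target function (the bracket [1, 2] and the cube-root example are mine):

def target(x):
    return x**3 - 2.0  # hypothetical function with a root at 2**(1/3)

root = None
for xmid in Bisection(1.0, 2.0, target, max_iter=60):
    root = xmid  # each iteration halves the bracket
print(root)  # approximately 1.259921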
I have recently been learning how to perform double integrals in Python. This is what I am using:
myint = dblquad(lambda xa,xb: np.exp(-(xa-xb)**2),-np.inf,x1,lambda x: -np.inf, lambda x: x2)
and for testing purposes I have chosen x1 and x2 to be say 5 and 10. This seems to work.
But in reality my x1 = [1,2,3,4,5] and x2 = [5,6,7,8,9], and I want the double integral to be performed for every combination of x1 and x2, i.e. over a matrix of limits. I could do this with two for loops, I guess, but I thought there might be a better way.
So my question is just: how do I perform a double integration over a matrix of limits?
Thank you.
edit:
I got the following warning:
UserWarning: The maximum number of subdivisions (50) has been achieved.
If increasing the limit yields no improvement it is advised to analyze
the integrand in order to determine the difficulties. If the position of a
local difficulty can be determined (singularity, discontinuity) one will
probably gain from splitting up the interval and calling the integrator
on the subranges. Perhaps a special-purpose integrator should be used.
Does this mean that it doesn't converge? I don't really understand the message.
When I plot:
y = exp(-(x-5)^2)
for example, it just looks like a Gaussian curve, so there should be no problem integrating over it, right? Is the problem because of the double integral?
Thank you.
edit:
Ah, I see. Thanks Raman Shah, I understand the problem now.
Using itertools you can create an iterator over the limits to walk through. This is essentially a double loop, but more extensible: with itertools.product you can have an arbitrary number of inputs, and you don't store all the limit pairs at once:
import numpy as np
from scipy.integrate import dblquad
import itertools
f = lambda xa, xb: np.exp(-(xa - xb)**2)

def intg(x1, x2):
    # integrate xa from -inf to x1 and xb from -inf to x2
    return dblquad(f, -np.inf, x1,
                   lambda x: -np.inf,
                   lambda x: x2)

X1 = np.arange(1, 6)
X2 = np.arange(5, 10)

for limit in itertools.product(X1, X2):
    print(limit, intg(*limit))
If you need more speed, you can look into the multiprocessing module for parallel computation, since each integral is independent of the rest.
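A hedged sketch of that idea (the pool size and the structure are illustrative, reusing the integrand from above):

import itertools
import numpy as np
from multiprocessing import Pool
from scipy.integrate import dblquad

def intg(limit):
    x1, x2 = limit
    return dblquad(lambda xa, xb: np.exp(-(xa - xb)**2), -np.inf, x1,
                   lambda x: -np.inf, lambda x: x2)

if __name__ == '__main__':
    limits = list(itertools.product(np.arange(1, 6), np.arange(5, 10)))
    with Pool(4) as pool:
        results = pool.map(intg, limits)  # each integral runs in its own task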
Why not use Python's zip function to feed exactly the values from each tuple that you want treated as inputs to your double integral, and then use map to operate on those discrete pairs?
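A minimal sketch of that suggestion, pairing x1[i] with x2[i] elementwise rather than over all combinations (the integrand is the one from the question):

import numpy as np
from scipy.integrate import dblquad

f = lambda xa, xb: np.exp(-(xa - xb)**2)
x1 = [1, 2, 3, 4, 5]
x2 = [5, 6, 7, 8, 9]
results = list(map(lambda lim: dblquad(f, -np.inf, lim[0],
                                       lambda x: -np.inf, lambda x: lim[1]),
                   zip(x1, x2)))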
I am trying to integrate a function over a list of points, passing the whole array to an integration function in order to vectorize the computation. For starters, calling scipy.integrate.quad is way too slow, since I have something like 10,000,000 points to integrate over. scipy.integrate.romberg does the trick much faster, almost instantaneously, while quad is slow since you must loop over it or vectorize it.
My function is quite complicated, but for demonstration purposes, let's say I want to integrate x**2 + y**2 over x from a to b, where y is an array of values. For example:
import numpy as np
from scipy.integrate import quad, romberg

y = np.arange(10, 20)  # an array of y values, defined here so the example runs

def integrand(x, y):
    return x**2 + y**2

quad(integrand, 0, 10, args=(y,))     # this fails, since y is not a scalar
romberg(integrand, 0, 10, args=(y,))  # y works here, giving the integral over
                                      # the entire range for every y at once
But this only works for fixed bounds. Is there a way to do something like

z = np.arange(20, 30)
romberg(integrand, 0, z)  # fails, since romberg doesn't seem to
                          # support variable bounds
The only way I see is to re-implement the algorithm itself in numpy so that I can have variable bounds. Is there any function that supports something like this? There is also romb, where you must supply the values of the integrand directly along with a dx interval, but that will be too imprecise for my complicated function (the Marcum Q function; I couldn't find any implementation of it, which could be another way to do it).
The best approach when trying to evaluate a special function is to write a routine that uses the properties of the function to evaluate it quickly and accurately in all parameter regimes. It is quite unlikely that a single approach will give accurate (or even stable) results for all ranges of parameters. Direct evaluation of an integral, as in this case, will almost certainly break down in many cases.
That being said, the general problem of evaluating an integral over many ranges can be solved by turning the integral into a differential equation and solving that. Roughly, the steps are as follows (a sketch in code follows the list):
1. Given an integral I(t), which I will assume is the integral of a function f(x) from 0 to t [this can be generalized to an arbitrary lower limit], write it as the differential equation dI/dt = f(t).
2. Solve this differential equation using scipy.integrate.odeint() with the initial condition I(0) over a range of times from 0 to t. The range should contain all limits of interest; how finely it is sampled depends on the function and on how accurately the integral needs to be evaluated.
3. The result is the value of the integral from 0 to t for every t in the input. This can be turned into a "continuous" function by interpolation; for example, using a spline, i = scipy.interpolate.InterpolatedUnivariateSpline(t, I).
4. Given arrays b and a of upper and lower limits, respectively, all the integrals can then be evaluated at once as res = i(b) - i(a).
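A minimal sketch of these steps, with a hypothetical integrand f (the sampling density and the limits are illustrative):

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import InterpolatedUnivariateSpline

def f(x):
    return np.exp(-x**2)  # hypothetical integrand

t = np.linspace(0.0, 10.0, 2001)               # must cover all limits of interest
I = odeint(lambda y, x: f(x), 0.0, t).ravel()  # solve dI/dt = f(t) with I(0) = 0
i = InterpolatedUnivariateSpline(t, I)         # a "continuous" antiderivative

a = np.array([1.0, 2.0, 3.0])  # lower limits
b = np.array([5.0, 6.0, 7.0])  # upper limits
res = i(b) - i(a)              # all the integrals of f from a to b at once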
Whether this approach works in your case will require you to study it carefully over your range of parameters. Also note that the Marcum Q function involves a semi-infinite integral. In principle this is not a problem: just transform the integral to one over a finite range, for example with the transformation x -> 1/x. There is no guarantee this approach will be numerically stable for your problem.