Calculate an integral in Python

I need to calculate an integral in python.
I have imported sympy.
g(a, z) = integral from z to infinity of t^(a-1) * e^(-t) dt
in python:
x,a,z = symbols('x a z')
g = integrate(x**(a-1) * exp(-x), z, oo)
I got error:
ValueError: Invalid limits given: (z, oo)
I called:
b,c,mean,variance = S('b c mean variance'.split())
ff1 = b*g((1+c), lb / b)  # lb is a constant; b and c are unknown variables that I need to solve for; mean and variance are constants.
I got error:
TypeError: 'Add' object is not callable

I am on Python 2.7, but your problem seems to be that you are not reading the documentation closely enough. The docs say:
var can be:
a symbol – indefinite integration
a tuple (symbol, a) – indefinite integration with result
given with a replacing symbol
a tuple (symbol, a, b) – definite integration
You want to perform the last one, so you need to use a tuple.
The command you are looking for is:
import sympy as sym
x, a, z = sym.symbols('x a z')
g = sym.integrate(x**(a-1) * sym.exp(-x), (x, z, sym.oo))
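As for the second error (TypeError: 'Add' object is not callable): integrate returns a SymPy expression, not a Python function, so g(...) cannot be called. A minimal sketch of one way to handle it, substituting into the expression instead (here lb is assumed to be a symbol or an already-known constant):

import sympy as sym

x, a, z = sym.symbols('x a z')
b, c, lb = sym.symbols('b c lb')

g = sym.integrate(x**(a - 1) * sym.exp(-x), (x, z, sym.oo))

# g is an expression in a and z; substitute values instead of calling it
ff1 = b * g.subs({a: 1 + c, z: lb / b})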

Related

Evaluating the solution of sympy dsolve numerically

I would like to evaluate the solution of a differential equation against some x_test
array
from __future__ import division
import numpy as np
from sympy import *
init_printing()
x, y, z, t = symbols('x y z t')
# Constants
C, R, u_rest = symbols('C R u_rest')
f, g, h = symbols('f g h', cls=Function)
solution = dsolve(C*Derivative(f(x), x) + (1/R)*(f(x) - u_rest), f(x))
x_test = np.array(range(0, 100))
but I fail
# First attempt, with evalf (not working)
solution.args[1].evalf(subs={x: 3.14})
# Second attempt, with lambdify (not working)
lambdify(x, solution.args[1])(3.14)
What is the right way to do it?
If you look at your solution.args[1] value you will see that it is an expression of several variables, including x. It will not evaluate to a number until you supply values for all variables. Your first attempt doesn't fail, but you don't explain why it is not giving you what you hoped for:
>>> solution.args[1].evalf(subs={x: 3.14})
u_rest*(1.0 - exp((C1 - 3.14/R)/C))
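To actually get numbers out, every remaining symbol has to be given a value; only then can the expression be lambdified over the x_test array. A minimal sketch, continuing from the code in the question and using made-up placeholder values for C, R, u_rest, and the integration constant C1 (none of these values come from the question):

import numpy as np
from sympy import Symbol, lambdify

C1 = Symbol('C1')  # the constant introduced by dsolve
rhs = solution.args[1]

# Substitute numbers for every symbol except x, then lambdify over x
rhs_num = rhs.subs({C: 1.0, R: 2.0, u_rest: -70.0, C1: 0.0})
f_num = lambdify(x, rhs_num, 'numpy')

x_test = np.array(range(0, 100))
print(f_num(x_test))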

Find zeroes of function with simpy.optimize.bisect, with a complex function

I'm a mechanical engineering student, and this is my first year working with Python, specifically the Anaconda distribution.
I was given a task to find the zeroes of this function:
D·sin(α)·cos(α) + l·cos(α)·sin(α)² - l·cos(α) - h·sin(α) = 0
With the parameters:
D = 220mm,
h = 1040mm,
l = 1420 mm, where
n = 81
is the number of equally distanced points on the function
and the function is limited to :
π›Όβˆˆ[0,2πœ‹] where 𝛼 is a np.array.
[plot of the function]
The issue is, when I try to insert the function in bisect(fun, a, b), the error says
'numpy.ndarray' object is not callable
Can someone aid a noob programer ? Thanks.
The question is not clear: you should share your code, and the title should say scipy, not simpy, if I am correct.
Apart from this, I do not get the same plot of the function, can you check if it is correct?
If you want to use the bisection method you should do something like this:
import numpy as np
from scipy.optimize import bisect
def fun(x, D, h, l):
    return D * np.sin(x) * np.cos(x) + l * np.cos(x) * np.sin(x) * 2 - l * np.cos(x) - h * np.sin(x)
D = 220
h = 1040
l = 1420
print(bisect(lambda x: fun(x, D, h, l), 0, 2*np.pi))
Note that the bisection method only finds one zero, and this does not work at all because the two extremes of the function have the same sign. For this particular function, you could run the bisect in the intervals (0, pi) and (pi, 2pi) to find both zeros.
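For instance, a sketch of that two-interval idea, reusing fun from above (the sign of the function changes across each half-interval, so bisect finds one root in each):

import numpy as np
from scipy.optimize import bisect

def fun(x, D, h, l):
    return D * np.sin(x) * np.cos(x) + l * np.cos(x) * np.sin(x) * 2 - l * np.cos(x) - h * np.sin(x)

D, h, l = 220, 1040, 1420

root1 = bisect(lambda x: fun(x, D, h, l), 0, np.pi)
root2 = bisect(lambda x: fun(x, D, h, l), np.pi, 2 * np.pi)
print(root1, root2)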

Laplace transform of unknown integral (function of time)

I need to calculate the Laplace transform of an integral function. It seems that sympy is not yet able to understand that.
Assuming the following:
from sympy import *
s, t = symbols('s t')
I = Function('I')(t)
eq1 = integrate(I, t)
transforms.laplace_transform(eq1, t, s)
The solution should be: I(s) / s
However, sympy gives: LaplaceTransform(Integral(I(t), t), t, s)
It seems to be an open issue Issue 7219. Is there any work around?
It seems that the issue hasn't been fixed yet.
However, we can give a "crappy workaround" based on the "crappy implementation" of Eric Wieser given for derivatives. Note, however, that the original snippet doesn't seem to work for derivatives either, because the internal representation of higher order derivatives seems to have changed since the snippet was posted.
Here's my "crappy" workaround that catches only the simplest of cases (derivatives only with respect to t, indefinite integrals only with respect to t, where t is the variable on which the Laplace transform acts):
from sympy import *

def laplace(e, t, s):
    """Hacked-up Laplace transform that handles derivatives and integrals.

    Updated generalization of
    https://github.com/sympy/sympy/issues/7219#issuecomment-154768904
    """
    res = laplace_transform(e, t, s, noconds=True)
    wf = Wild('f')
    lw = LaplaceTransform(wf, t, s)
    for exp in res.find(lw):
        e = exp.match(lw)[wf]
        args = e.args
        if isinstance(e, Derivative):
            # for derivatives, check that there's only d/dt^n with n > 0
            if len(args) == 2 and args[1][0] == t:
                n = args[1][1]
                if n > 0:
                    newexp = s**n * LaplaceTransform(e.args[0], t, s)
                    res = res.replace(exp, newexp)
        elif isinstance(e, Integral):
            # for integrals, check that there are only consecutive indefinite integrals w.r.t. t
            if all(len(arg) == 1 and arg[0] == t for arg in args[1:]):
                newexp = s**(-len(args[1:])) * LaplaceTransform(args[0], t, s)
                res = res.replace(exp, newexp)
        # otherwise don't do anything
    return res

x = Function('x')
s, t = symbols('s t')
print(laplace(Derivative(x(t), t, 3), t, s))
print(laplace(Integral(Integral(x(t), t), t), t, s))
The above outputs
s**3*LaplaceTransform(x(t), t, s)
LaplaceTransform(x(t), t, s)/s**2
as expected. Using your specific example:
I = Function('I')(t)
eq1 = integrate(I, t)
LI = laplace(eq1, t, s)
print(LI)
we get
LaplaceTransform(I(t), t, s)/s
which is the correct representation of "I(s)/s" that you expected.
The way the above workaround works is that it matches the arguments of the LaplaceTransform and checks if there's a pure Derivative or Integral inside. For Derivative we check that there's only differentiation with respect to t; this is what Eric's original workaround did, but while his code seems to have expected args of the form Derivative(x(t), t, t, t), the current representation of derivatives is Derivative(x(t), (t,3)). This is why handling this use case had to be changed.
As for Integrals, the representation is similar to the original one: Integral(x(t), t, t) is a double integral. I still had to tweak Eric's original, because the args of this expression contain tuples for each integral rather than a scalar t, in order to accommodate definite integrals. Since we only want to handle the no-brainer case of indefinite integrals, I made sure that there's only indefinite integrals and only with respect to t.
If the argument of the LaplaceTransform is anything else, the expression is left alone.
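To see the argument shapes the workaround relies on, here is a quick check (the exact printing may vary slightly between SymPy versions):

from sympy import Function, Integral, Derivative, symbols

t = symbols('t')
x = Function('x')

print(Derivative(x(t), t, 3).args)   # roughly: (x(t), (t, 3))
print(Integral(x(t), t, t).args)     # roughly: (x(t), (t,), (t,))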

How to put an integral in a function in python/matplotlib

So pretty much, I am aiming to achieve a function f(x)
My problem is that my function has an integral in it, and I only know how to construct definite integrals. So my question is: how does one create an indefinite integral in a function? (There may also be some other method I am currently unaware of.)
My function is defined as :
(G is gravitational constant, although you can leave G out of your answer for simplicity, I'll add it in my code)
Here is the starting point, but I don't know how to do the integral portion
import numpy as np

def f(x):
    rho = 5*(1/(1+((x**2)/(3**2))))
    function_result = rho * 4 * np.pi * x**2
    return function_result
Please let me know if I need to elaborate on something.
EDIT-----------------------------------------------------
I made some major progress, but I still have one little error.
Pretty much, I did this:
from sympy import *
x = Symbol('x')
rho = p0()*(1/(1+((x**2)/(rc()**2))))* 4 * np.pi * x**2
fooply = integrate(rho,x)
def f(rx):
    function_result = fooply.subs({x: rx})
    return function_result
Which works fine when I plug in one number for f; however, when I plug in an array (as I need to later), I get the error:
raise SympifyError(a)
sympy.core.sympify.SympifyError: SympifyError: [3, 3, 3, 3, 3]
(Here, I did print(f([3,3,3,3,3]))). Usually, the function returns an array of values. So if I did f([3,2]) it should return [f(3),f(2)]. Yet, for some reason, it doesn't for my function....
Thanks in advance
how about:
import numpy as np
from sympy import *
x, p0, rc = symbols('x p0 rc', real=True, positive=True)
rho = p0*(1/(1+((x**2)/(rc))))* 4 * pi * x**2
fooply = integrate(rho,x)/x
rho, fooply
(4*pi*p0*x**2/(1 + x**2/rc),
4*pi*p0*rc*(-sqrt(rc)*atan(x/sqrt(rc)) + x)/x)
fooply = fooply.subs({p0: 2.0, rc: 3.0})
np_fooply = lambdify(x, fooply, 'numpy')
print(np_fooply(np.array([3,3,3,3,3])))
[ 29.81247362 29.81247362 29.81247362 29.81247362 29.81247362]
To plug in an array to a SymPy expression, you need to use lambdify to convert it to a NumPy function (f = lambdify(x, fooply)). Just using def and subs as you have done will not work.
Also, in general, when using symbolic computations, it's better to use sympy.pi instead of np.pi, as the former is symbolic and can simplify. It will automatically be converted to the numeric pi by lambdify.
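A small illustration of that point (the float result shown is approximate):

import numpy as np
from sympy import sin, pi

print(sin(pi))      # 0 (exact, simplifies symbolically)
print(sin(np.pi))   # 1.22464679914735e-16 (float pi is only approximate)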

Solving a system of odes (with changing constant!) using scipy.integrate.odeint?

I currently have a system of odes with a time-dependent constant. E.g.
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
The constants are "a", "b" and "c". I currently have a list of "a"s for every time-step which I would like to insert at every time-step, when using the scipy ode solver...is this possible?
Thanks!
Yes, this is possible. In the case where a is constant, I guess you called scipy.integrate.odeint(fun, u0, t, args) where fun is defined as in your question, u0 = [x0, y0, z0] is the initial condition, t is a sequence of time points for which to solve for the ODE and args = (a, b, c) are the extra arguments to pass to fun.
In the case where a depends on time, you simply have to reconsider a as a function, for example (given a constant a0):
def a(t):
    return a0 * t
Then you will have to modify fun which computes the derivative at each time step to take the previous change into account:
def fun(u, t, a, b, c):
    x = u[0]
    y = u[1]
    z = u[2]
    dx_dt = a(t) * x + y * z  # a change on this line: a -> a(t)
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]
Eventually, note that u0, t and args remain unchanged and you can again call scipy.integrate.odeint(fun, u0, t, args).
A word about the correctness of this approach: the accuracy of the numerical integration is affected, though I don't know precisely how (there are no theoretical guarantees). But here is a simple example which works:
import matplotlib.pyplot as plt
import numpy as np
import scipy as sp
import scipy.integrate

tmax = 10.0

def a(t):
    if t < tmax / 2.0:
        return ((tmax / 2.0) - t) / (tmax / 2.0)
    else:
        return 1.0

def func(x, t, a):
    return -(x - a(t))

x0 = 0.8
t = np.linspace(0.0, tmax, 1000)
args = (a,)
y = sp.integrate.odeint(func, x0, t, args)

fig = plt.figure()
ax = fig.add_subplot(111)
h1, = ax.plot(t, y)
h2, = ax.plot(t, [a(s) for s in t])
ax.legend([h1, h2], ["y", "a"])
ax.set_xlabel("t")
ax.grid()
plt.show()
I hope this will help you.
No, that is not possible in the literal sense of
"I currently have a list of "a"s for every time-step which I would like to insert at every time-step"
as the solver has adaptive step size control, that is, it will use internal time steps that you have no control over, and each time step uses several evaluations of the function. Thus there is no connection between the solver time steps and the data time steps.
In the extended sense that the given data defines a piecewise constant step function however, there are several approaches to get to a solution.
You can integrate from jump point to jump point, using the ODE function with the constant parameter for this time segment. After that use numpy array operations like concatenate to assemble the full solution.
You can use interpolation functions like numpy.interp or scipy.interpolate.interp1d. The first gives a piecewise linear interpolation, which may not be desired here. The second returns a function object that can be configured to be a "zero-order hold", which is a piecewise constant step function; a sketch of this approach follows below.
You could implement your own logic to go from the time t to the correct values of those parameters. This mostly applies if there is some structure to the data, for instance, if they have the form f(int(t/h)).
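For instance, a minimal sketch of the zero-order-hold interpolation approach mentioned above; the time grid and a values here are made-up placeholders, and b and c are taken as plain constants:

import numpy as np
from scipy.integrate import odeint
from scipy.interpolate import interp1d

# made-up data: one value of a per data time step
t_data = np.linspace(0.0, 10.0, 11)
a_data = np.linspace(1.0, 0.1, 11)

# zero-order hold: a(t) is the most recent tabulated value
a_of_t = interp1d(t_data, a_data, kind='previous', bounds_error=False,
                  fill_value=(a_data[0], a_data[-1]))

def fun(u, t, a, b, c):
    x, y, z = u
    dx_dt = a(t) * x + y * z
    dy_dt = b * (y - z)
    dz_dt = -x * y + c * y - z
    return [dx_dt, dy_dt, dz_dt]

u0 = [1.0, 1.0, 1.0]
t = np.linspace(0.0, 10.0, 1001)
sol = odeint(fun, u0, t, args=(a_of_t, 10.0, 28.0))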
Note that the approximation order of the numerical integration is not only bounded by the order of the RK (solve_ivp) or multi-step (odeint) method, but also by the differentiability order of the (parts of) the differential equation. If the ODE is much less smooth than the order of the method, the implicit assumptions for the step size control mechanism are violated, which may result in a very small step size requiring a huge number of integration steps.
I also encountered a similar problem. In my case, the parameters a, b, and c are not a direct function of time, but are determined by x, y, and z at that time. So I have to get x, y, z at time t, and calculate a, b, c for the integration of x, y, z at t+dt. It turns out that if I change the dt value, the whole integration result changes dramatically, even to something unreasonable.
