I am not getting the expected result. Instead, SymPy returns the following:
exp(-oo*sign(a**2))
This is what I tried:
import sympy as sp

x = sp.Symbol('x')
a = sp.Symbol('a')

def f(x, a):
    return (a - x)**2 / a**2

sp.exp(-1 * sp.integrate(f(x, a), (x, 0, sp.oo)))
The result should be
(3*a**2 - 3*a + 1)/(3*a**2)
I don't know why you expect a finite result here:
In [20]: Integral((a-x)**2, (x, 0, oo))
Out[20]: Integral((a - x)**2, (x, 0, oo))
In [21]: Integral((a-x)**2, (x, 0, oo)).doit()
Out[21]: oo
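For what it's worth, if the intended range was the finite interval (0, 1) rather than (0, oo) (a guess based on the expected expression), then the expected result does come out; a quick check:

import sympy as sp

x, a = sp.symbols('x a')

# Over (0, 1) the integral is finite; the difference from the expected expression
# (3*a**2 - 3*a + 1)/(3*a**2) simplifies to 0, i.e. they agree.
expr = sp.integrate((a - x)**2 / a**2, (x, 0, 1))
print(sp.simplify(expr - (3*a**2 - 3*a + 1) / (3*a**2)))  # 0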
For example, if I want the Fourier series of the function f(x) = x(π - x) on [0, π], I can calculate the coefficients of the sine series by hand:
b_n = (2/π) ∫₀^π x(π - x) sin(n x) dx = 8/(π n³) for odd n (and 0 for even n).
This works by considering f(x) as a half-range Fourier series: the half interval [0, π] is extended to [-π, 0] by taking the extension of f(x) to be an odd function (so I don't need the cosine terms in the full Fourier series expansion).
Using SymPy, however, I get a cosine series:
import sympy as sp
x = sp.symbols('x', real=True)
f = x * (sp.pi - x)
s = sp.fourier_series(f, (x, 0, sp.pi))
s.truncate(4)
-cos(2*x) - cos(4*x)/4 - cos(6*x)/9 + pi**2/6
Even constructing a Piecewise function with the correct parity doesn't work:
p = sp.Piecewise((-f, x < 0), (f, x >= 0))
ps = sp.fourier_series(p, (x, 0, sp.pi))
ps.truncate(4)
-cos(2*x) - cos(4*x)/4 - cos(6*x)/9 + pi**2/6
The cosine series sort-of approximates f(x) but not nearly as well as the sine one. Is there any way to force SymPy to do what I did with paper and pen?
Integrating directly, as proposed in the answer by @Oscar, is a simple solution.
It is still interesting to understand why your approach is not working.
To get an odd function for negative x values, we need to use
fneg = x * (sp.pi + x)   (which equals -f(-x))
Then, to calculate the coefficients, you need to integrate from -π to π.
Code
import sympy as sp
x = sp.symbols('x', real=True)
f = x * (sp.pi - x)
fneg = x * (sp.pi + x)
p = sp.Piecewise((fneg, x < 0), (f, x >= 0))
ps = sp.fourier_series(p, (x, -sp.pi, sp.pi))
print ('ps = ', ps.truncate(4))
Result:
ps = 8*sin(x)/pi + 8*sin(3*x)/(27*pi) + 8*sin(5*x)/(125*pi) + 8*sin(7*x)/(343*pi)
You can compute the series coefficients directly:
In [52]: f
Out[52]: x⋅(π - x)
In [53]: n = symbols('n', integer=True)
In [54]: bn = 2/pi*integrate(f*sin(n*x), (x, 0, pi))
In [55]: bn
Out[55]: 2*Piecewise((-2*(-1)**n/n**3 + 2/n**3, Ne(n, 0)), (0, True))/pi
In [56]: f_approx = summation(bn*sin(n*x), (n, 0, 5))
In [57]: f_approx
Out[57]: 8*sin(x)/pi + 8*sin(3*x)/(27*pi) + 8*sin(5*x)/(125*pi)
In [58]: plot(f_approx, f, (x, -pi, 2*pi))
I'm a newbie learning Python. I have a question; can you guys help me? This is my code:
from sympy import *

x = Symbol('x')  # x defined without assumptions (this is what leads to re/im appearing)

def test(f, g, a):
    f1 = f.subs(x, g)
    df1 = diff(f1, x).subs(x, a)
    return df1

print(test((2*(x**2) + abs(x + 1)), (x - 1), -1))
Result: -Subs(Derivative(re(x), x), x, -1) - 8
I'm taking the derivative of f(g(x)) with f = 2x^2 + abs(x + 1), g = x - 1, evaluated at x = -1. When I use diff, the result is -Subs(Derivative(re(x), x), x, -1) - 8, but when I use the limit definition lim x->x0 (f(x) - f(x0))/(x - x0) I get -9. I also checked with a calculator, and -9 is the correct result. Is there a way to make diff return -9? Any help or pointers would be appreciated.
Thanks!
Whenever I see a re or im appear when I didn't expect them, I am inclined to make the symbols real:
>>> from sympy import *
>>> def test(f, g, a):
...     f1 = f.subs(x, g)
...     df1 = diff(f1, x).subs(x, a)
...     return df1
...
>>> var('x',real=True)
x
>>> print(test((2*(x**2) + abs(x + 1)), (x - 1), -1))
-9
Since I'm still a relative beginner to sympy I like to view intermediate results (I even like to do that with numpy which I know much better). Running in isympy:
In [6]: diff(f1,x)
Out[6]: 4*x - 4 + (re(x)*Derivative(re(x), x) + im(x)*Derivative(im(x), x))*sign(x)/x
That expression contains unevaluated derivatives (d/dx) and distinguishes between the real and imaginary parts of x.
Restricting x to real as suggested in the other answer produces:
In [19]: diff(exp,x)
Out[19]: 4⋅x + sign(x + 1)
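To tie the two answers together, here is a minimal check (a sketch, assuming x is declared real as suggested): the chain-rule derivative collapses to 4*x - 4 + sign(x), and substituting x = -1 gives the expected -9.

from sympy import symbols, Abs, diff

x = symbols('x', real=True)
f1 = 2*(x - 1)**2 + Abs(x)       # f(g(x)) with f = 2*x**2 + Abs(x + 1) and g = x - 1
print(diff(f1, x))               # 4*x - 4 + sign(x)  (printed term order may vary)
print(diff(f1, x).subs(x, -1))   # -9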
from sympy import Symbol
x = Symbol('x')
equation = x**2 + 2**x - 2*x - 5**x + 1
Here, in this equation, for example, the polynomial part is x**2 - 2*x + 1 while the non-polynomial part is 2**x - 5**x.
Given an equation, how to extract the polynomial and the non-polynomial parts of it?
You can use the as_poly method to find the terms that are polynomial in the given symbol:
In [1]: from sympy import Symbol, Add
   ...:
   ...: x = Symbol('x')
   ...: equation = x**2 + 2**x - 2*x - 5**x + 1
In [2]: poly, nonpoly = [], []
In [3]: for term in Add.make_args(equation):
   ...:     if term.as_poly(x) is not None:
   ...:         poly.append(term)
   ...:     else:
   ...:         nonpoly.append(term)
   ...:
In [4]: poly
Out[4]: [1, x**2, -2*x]
In [5]: nonpoly
Out[5]: [2**x, -5**x]
In [6]: Add(*poly)
Out[6]: x**2 - 2*x + 1
In [7]: Add(*nonpoly)
Out[7]: 2**x - 5**x
https://docs.sympy.org/latest/modules/core.html#sympy.core.expr.Expr.as_poly
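If you need this split more than once, the same logic can be wrapped in a small helper; split_poly is just a name introduced here for illustration:

from sympy import Add, Symbol

def split_poly(expr, x):
    # Split expr into (polynomial part, non-polynomial part) with respect to x
    poly, nonpoly = [], []
    for term in Add.make_args(expr):
        if term.as_poly(x) is not None:
            poly.append(term)
        else:
            nonpoly.append(term)
    return Add(*poly), Add(*nonpoly)

x = Symbol('x')
print(split_poly(x**2 + 2**x - 2*x - 5**x + 1, x))  # (x**2 - 2*x + 1, 2**x - 5**x)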
Separate all the terms first:
lst = equation.args
Then use the degree() function from SymPy to find the degree of each term in lst. It raises a PolynomialError if a term is not a polynomial in the given symbol.
The error can be handled using try ... except statements, as sketched below.
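A minimal sketch of that approach (assuming, as stated above, that degree() raises PolynomialError for non-polynomial terms):

from sympy import Add, Symbol, degree
from sympy.polys.polyerrors import PolynomialError

x = Symbol('x')
equation = x**2 + 2**x - 2*x - 5**x + 1

poly_part, nonpoly_part = [], []
for term in Add.make_args(equation):
    try:
        degree(term, x)              # raises PolynomialError for terms like 2**x
        poly_part.append(term)
    except PolynomialError:
        nonpoly_part.append(term)

print(Add(*poly_part))     # x**2 - 2*x + 1
print(Add(*nonpoly_part))  # 2**x - 5**x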
I have a nonlinear function from neuroscience; ad is a parameter and t is the time.
import numpy as np

def alpha(t, ad):
    if t < 0:
        return 0
    else:
        return pow(ad, 2) * t * np.exp(-1 * ad * t)
With ad = 2 it rises from t = 0, peaks at about 0.7, and becomes near 0 again by t = 3.
I can find where this function is near or equal to 0 by iterating over small intervals, but I was wondering if there is any way to find that point without iterating, e.g. by intersecting with the line y = 0, or by finding where the derivative equals 0.
I presume that you want to find when the derivative is equal to zero. You can do that with sympy:
In [12]: from sympy import exp, Symbol, solve, nsolve
In [13]: ad = 2
In [14]: t = Symbol('t')
In [15]: f = pow(ad, 2) * t * exp(-1 * ad * t)
In [16]: f
Out[16]: 4*t*exp(-2*t)
In [17]: f.diff(t)
Out[17]: -8*t*exp(-2*t) + 4*exp(-2*t)
In [18]: solve(f.diff(t), t)
Out[18]: [1/2]
EDIT: Your answer below suggests a different question from the one in the OP so I'll update this:
You want to find the zeros of this:
In [5]: ad = Symbol('ad')
In [6]: t = Symbol('t')
In [7]: epsilon = Symbol('epsilon')
In [8]: f = pow(ad, 2) * t * exp(-1 * ad * t) - epsilon
In [9]: f
Out[9]: ad**2*t*exp(-ad*t) - epsilon
We can solve this analytically using solve:
In [10]: sol, = solve(f, t)
In [11]: sol
Out[11]: -LambertW(-epsilon/ad)/ad
This answer is given in terms of the Lambert W function. You can substitute for ad and epsilon to get the answer for any particular values:
In [12]: sol.subs({ad:2, epsilon:0.01})
Out[12]: 0.00251259459155665
This gives you the root near zero, though, because it is branch W0 of the Lambert W function. The other root is given by branch W_{-1}:
In [32]: -LambertW(-epsilon/ad, -1)/ad
Out[32]: -LambertW(-epsilon/ad, -1)/ad
In [28]: (-LambertW(-epsilon/ad, -1)/ad).subs({ad:2, epsilon:0.01}).n()
Out[28]: 3.64199856754954
If you just want to solve for these numerically then you can use nsolve:
In [29]: nsolve(f.subs({ad:2, epsilon:0.01}), t, 0)
Out[29]: 0.00251259459155665
In [30]: nsolve(f.subs({ad:2, epsilon:0.01}), t, 1)
Out[30]: 3.64199856754954
I was able to find a solution using Oscar's answer.
The function is exponential, so it never actually reaches 0 after t = 0.
I just pulled the function down a bit by subtracting epsilon (epsilon is user-defined).
There will be two zeros, and I just need the second one.
from sympy import exp, Symbol, solve
epsilon = 0.01
ad = 2
t = Symbol('t')
f = pow(ad, 2) * t * exp(-1 * ad * t) - epsilon
print(solve([f], t, dict=True, quick=True))
#print(solve([t > 0, f], t, dict=True, quick=True)) i get an error if i use this
But this way of finding a near-zero x took much more time than just iterating over the function in steps of 0.01:
from sympy import exp, Symbol, solve, Piecewise
import numpy as np
epsilon = 0.01
ad = 2
t = Symbol('t')
f = pow(ad, 2) * t * exp(-1 * ad * t) - epsilon
#print(solve([f], t, dict=True, quick=True))
def alpha(t, ad):
    if t < 0:
        return 0
    else:
        return pow(ad, 2) * t * np.exp(-1 * ad * t)

def find(function, farg, arange, epsilon):
    xs = []
    for x in arange:
        if function(x, **farg) < epsilon:
            xs.append(x)
    return xs

if __name__ == "__main__":
    import timeit
    print(timeit.timeit("solve([f], t, dict=True)",
                        setup="from __main__ import solve, f, t",
                        number=100))
    print(timeit.timeit("solve([f], t, dict=True, quick=True)",
                        setup="from __main__ import solve, f, t",
                        number=100))
    print(timeit.timeit("find(alpha, {'ad': ad}, np.arange(0, 4, 0.01), 0.01)",
                        setup="from __main__ import find, alpha, ad, t, f, np",
                        number=100))
Outputs
24.860103617
24.020882552
0.10911201300000073
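If speed is the main concern, nsolve (used in the answer above) finds the same larger root numerically from a starting guess and is typically much cheaper than the symbolic solve call; a minimal sketch with the same f:

from sympy import exp, Symbol, nsolve

epsilon = 0.01
ad = 2
t = Symbol('t')
f = pow(ad, 2) * t * exp(-1 * ad * t) - epsilon

# Numeric root finding from a starting guess near the second zero
print(nsolve(f, t, 1))   # 3.64199856754954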
I'm trying to compute the integral (specifically, the Laplace transform) of a piecewise function:
w_k(t) = k*t, 0 <= t <= 1/k
         1,   otherwise
So I attempted to integrate it as follows:
from sympy import *

t, k = symbols('t k', positive=True)
s = symbols('s')  # s defined without assumptions
w_k = Piecewise((k*t, (0 <= t) & (t <= 1/k)), (1, True))
integrate(w_k * exp(-s*t), (t, 0, oo)).doit()
Out: Integral(Piecewise((k*t*exp(-s*t), t <= 1/k), (exp(-s*t), True)), (t, 0, oo))
Using laplace_transform gives me the same result.
I'm aware that certain conditions sometimes need to hold for SymPy to be able to evaluate an integral, but I'm really not sure what is needed for this function beyond the positive variables. Is there any way to force SymPy to either compute the integral, or explain why it isn't possible?
Try this (the key difference is that s is also declared positive, which lets SymPy evaluate the improper integral):
import math
from sympy import *
s = Symbol('s', positive=True)
t = Symbol('t', positive=True)
k = Symbol('k', positive=True)
function = Piecewise((k*t, (0 <= t)&(t <= 1/k)), (1, True))
result = integrate(function * exp(-s*t), (t, 0, math.inf))
Output:
k/s**2 + exp(-s/k)/s + (-k - s)*exp(-s/k)/s**2
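As a quick sanity check of that result (a sketch, not part of the original answer): as k grows, the ramp w_k approaches the unit step, so the transform should approach 1/s.

from sympy import symbols, exp, integrate, limit, oo, Piecewise

s, t, k = symbols('s t k', positive=True)
w_k = Piecewise((k*t, (0 <= t) & (t <= 1/k)), (1, True))
result = integrate(w_k * exp(-s*t), (t, 0, oo))

# In the limit k -> oo the ramp becomes the unit step, whose Laplace transform is 1/s
print(limit(result, k, oo))   # 1/s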