I have the following sympy code:
from sympy import symbols, exp, I
W, k = symbols('W k', real=True)
expr = exp(W)*(exp(I*k) - exp(-I*k))
print(expr)
and I would like sympy to simplify it to:
exp(W)*(2*I*sin(k))
I have tried expr.simplify() and expr.trigsimp(), but neither replaces the exponentials with trig functions. The only partial solution I was able to find is
expr.rewrite(cos).trigsimp()
but this also expands exp(W) to hyperbolic sine/cosine, which I don't want.
Ok, using
expr.rewrite(cos).simplify()
worked.
expr.expand(complex=True).simplify()
should work.
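Putting that suggestion into a minimal self-contained script (assuming a recent SymPy):

```python
from sympy import symbols, exp, sin, I, simplify

W, k = symbols('W k', real=True)
expr = exp(W)*(exp(I*k) - exp(-I*k))

# expand(complex=True) splits the exponentials into real and imaginary
# parts (using W, k real); simplify() then recombines them into sin(k)
result = expr.expand(complex=True).simplify()
print(result)
```

Because W and k were declared real, exp(W) is left untouched and only the complex exponentials collapse into the sine.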
I am writing code in which I calculate derivatives and integrals. For that I use sympy and scipy.integrate, respectively. But using them together produces strange, erroneous behaviour. Below is a minimal example that reproduces it:
from scipy.integrate import quadrature
import sympy
import math

def func(z):
    return math.e**(2*z**2) + 3*z

z = sympy.symbols('z')
f = func(z)
dlogf_dz = sympy.Lambda(z, sympy.log(f).diff(z))
print(dlogf_dz)
print(dlogf_dz(10))

def integral(z):
    # WHY DO I HAVE TO USE [0] BELOW ?!?!
    d_ARF = dlogf_dz(z[0])
    return d_ARF

result = quadrature(integral, 0, 3)
print(result)
>>> Lambda(z, (4.0*2.71828182845905**(2*z**2)*z + 3)/(2.71828182845905**(2*z**2) + 3*z))
>>> 40.0000000000000
>>> (8.97457203290041, 0.00103422711649337)
The first 2 print statements deliver mathematically correct results, but the integration result is wrong: instead of ~8.975 it should be ~18. I know this because I double-checked with WolframAlpha; since dlogf_dz is mathematically quite simple, you can check it yourself.
Strangely enough, if I just hard-code the math-expression from my first print statement into integral(z), I get the right answer:
def integral(z):
    d_ARF = (4.0*2.71828182845905**(2*z**2)*z + 3)/(2.71828182845905**(2*z**2) + 3*z)
    return d_ARF

result = quadrature(integral, 0, 3)
print(result)
>>> (18.000000063540558, 1.9408245677254854e-07)
I think the problem is that I don't pass dlogf_dz to integral(z) in the correct way; I probably have to define dlogf_dz as a function some other way. Note that I defined d_ARF = dlogf_dz(z[0]) in integral(z), because otherwise the function raised errors.
What am I doing wrong? How can I fix the code, i.e. make dlogf_dz compatible with scipy's integration? Thanks
In a sympy environment where z is a symbol, I can create a numpy-compatible function from a sympy expression using lambdify (not Lambda).
In [22]: expr = E**(2*z**2)+3*z
In [23]: fn = lambdify(z, log(expr).diff(z))
In [24]: quad(fn,0,3)
Out[24]:
lambdify produced this function:
In [25]: help(fn)
Help on function _lambdifygenerated:
_lambdifygenerated(z)
Created with lambdify. Signature:
func(z)
Expression:
(4*z*exp(2*z**2) + 3)/(3*z + exp(2*z**2))
lambdify isn't foolproof, but it is the best tool for using sympy and scipy together.
https://docs.sympy.org/latest/modules/utilities/lambdify.html
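For reference, the whole round trip fits in one short script. This sketch uses scipy.integrate.quad rather than quadrature, since quadrature has been deprecated in newer SciPy releases:

```python
from scipy.integrate import quad
import sympy

z = sympy.symbols('z')
expr = sympy.E**(2*z**2) + 3*z

# lambdify turns the symbolic derivative into a plain numeric function
# that scipy can evaluate on floats
fn = sympy.lambdify(z, sympy.log(expr).diff(z))

result, error = quad(fn, 0, 3)
print(result)  # close to 18
```

quad returns the integral value and an error estimate, just like quadrature did.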
Whenever possible, avoid mixing sympy with other numerical packages. For example, you wrote:
def func(z):
    return math.e**(2*z**2) + 3*z
Let's replace math.e with sympy.E:
def func(z):
    return sympy.E**(2*z**2) + 3*z
Now, before reaching for SciPy: since this appears to be a relatively simple expression, I would first try integrating with SymPy itself:
dlogf_dz = sympy.log(f).diff(z)
dlogf_dz.integrate((z, 0, 3)).n()
# out: 18.0000001370698
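As a sanity check on that number: since the integrand is d/dz log(f), the fundamental theorem of calculus gives the exact value log(f(3)) - log(f(0)) without any quadrature at all:

```python
import sympy

z = sympy.symbols('z')
f = sympy.E**(2*z**2) + 3*z

# The integral of d/dz log(f) from 0 to 3 is log(f(3)) - log(f(0));
# here f(0) = 1, so the exact answer is log(e**18 + 9)
exact = sympy.log(f.subs(z, 3)) - sympy.log(f.subs(z, 0))
print(exact.n())  # 18.0000001370698
```

This also shows why the true value is not exactly 18: it is 18 plus the tiny correction log(1 + 9/e**18).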
I want to integrate sin((a-b)*x) along x from 0 to pi/4 in sympy. It gives me a piecewise answer for when (a-b)!=0 and (a-b)=0. How do I integrate only for the condition that (a-b)!=0?
I tried the following code with the relational operator but it didn't help.
from sympy import *
a, b, x = symbols("a b x")
Rel(a, b, "ne")
integrate(sin((a-b)*x), (x, 0, pi/4))
You can use conds="none" inside the integrate command:
integrate(sin((a-b)*x), (x, 0, pi/4), conds="none")
Alternatively, you can extract the piece you are interested in from a piecewise result, by exploring its arguments:
res = integrate(sin((a-b)*x), (x, 0, pi/4))
final = res.args[0][0]
Edit: Note that the command Rel(a, b, "ne") does nothing here. It just creates an inequality that is never used.
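Both approaches can be checked in one self-contained snippet (assuming the limits the question describes, 0 to pi/4):

```python
from sympy import symbols, sin, pi, integrate, Piecewise

a, b, x = symbols("a b x")

# Default: the definite integral comes back as a Piecewise with a
# separate branch for the a - b == 0 case
res_pw = integrate(sin((a - b)*x), (x, 0, pi/4))

# conds="none" keeps only the generic (a - b != 0) branch
res = integrate(sin((a - b)*x), (x, 0, pi/4), conds="none")
print(res)
```

The conds="none" result contains no Piecewise, so it can be manipulated directly.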
I am using sympy to help automate the process of finding equations of motion for some systems using the Euler-Lagrange method. What would really make this easy is if I could define a function q and specify its time derivative qd --> d/dt(q) = qd. Likewise I'd like to specify d/dt(qd) = qdd. This is helpful because, as part of the process of finding the equations of motion, I need to take derivatives (with respect to time, q, and qd) of expressions that are functions of q and qd. I'll end up with an equation in terms of q, qd, and qdd, and I'd like to be able either to print it neatly or to use lambdify to convert it to a neat numpy function for use in a simulation.
Currently I've accomplished this in a very roundabout and annoying way, by defining q as a function and qd as the derivative of that function:
q = sympy.Function('q', real=True)(t)
q_diff = sympy.diff(q, t)
This is fine for most of the process, but then I end up with a messy expression filled with Derivative(q, t) and Derivative(Derivative(q, t), t), which is hard to wrangle into a neat printing format and difficult to turn into a numpy function using lambdify. My current solution is to use the subs function and replace q_diff and diff(q_diff, t) with sympy symbols qd and qdd respectively, which cleans things up and makes manipulating the expression much easier. This seems like a bad hack, though, and takes tons of time for more complicated equations with lots of state variables.
What I'd like is to define a function, q, with a specific value for the time derivative. I'd pass in that value when creating the function, and sympy could treat it like a generic Function object but would use whatever I'd given it for the time derivative instead of just saying "Derivative(q, t)". I'd like something like this:
qdd = sympy.symbols('qdd')
qd = my_func(name='qd', time_deriv=qdd)
q = my_func(name='q', time_deriv=qd)
diff(q, t)
>>> qd
diff(q**2, t)
>>> 2*q*qd
diff(diff(q**2, t))
>>> 2*q*qdd + 2*qd**2
expr = q*qd**2
expr.subs(q, 5)
>>> 5*qd**2
Something like that, where I could still use the subs command and lambdify command to substitute numeric values for q and qd, would be extremely helpful. I've been trying to do this but I don't understand enough of how the base sympy.Function class works to get this going. This is what I have right now:
class func(sp.Function):
    def __init__(self, name, deriv):
        self.deriv = deriv
        self.name = name

    def diff(self, *args, **kwargs):
        return self.deriv

    def fdiff(self, argindex=1):
        assert argindex == 1
        return self.deriv
This code does not really work so far; I don't know how to specify that the time derivative of q specifically is qd. Right now all derivatives of q just return q.
I don't know if this is just a really bad solution, if I should be avoiding this issue entirely, or if there's already a clean way to solve this. Any advice would be very appreciated.
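For what it's worth, the subs-based cleanup described above can be automated in a few lines. This is only a sketch; qs, qd, and qdd are plain symbols introduced here for display and lambdify, not part of any existing API:

```python
import sympy

t = sympy.symbols('t')
q = sympy.Function('q', real=True)(t)

# Plain symbols standing in for q, dq/dt, d2q/dt2
qs, qd, qdd = sympy.symbols('q qd qdd')

expr = sympy.diff(q**2, t)  # 2*q(t)*Derivative(q(t), t)

# Substitute the highest derivatives first, so nested Derivative
# objects are replaced before their arguments are touched
clean = expr.subs([(sympy.diff(q, t, 2), qdd),
                   (sympy.diff(q, t), qd),
                   (q, qs)])
print(clean)  # 2*q*qd
```

The substitution order matters: replacing q before its derivatives would leave the Derivative objects malformed. The cleaned expression is made of ordinary symbols, so it lambdifies without trouble.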
If I use functions in SymPy and call the diff method, the commutative=False setting just gets ignored.
h = Function('h',real=True,commutative=False)(t)
R = Function('R',real=True,commutative=False)(t)
print(diff(R*h,t))
# returns:
R(t)*Derivative(h(t), t) + h(t)*Derivative(R(t), t)
Am I doing something wrong here? I just want the output to always have R in front.
This is arguably a bug in SymPy, which determines the commutativity of a function from its arguments. See also this comment. It's not related to derivatives: simply printing h*R will expose the bug (the expression is presented as R(t)*h(t)).
Until this behavior is changed, it seems the only way to achieve the desired result is to declare t to be noncommutative:
t = Symbol('t', commutative=False)
h = Function('h', real=True)(t)
R = Function('R', real=True)(t)
print(diff(R*h, t))
prints
R(t)*Derivative(h(t), t) + Derivative(R(t), t)*h(t)
For example, see the Mathematica code below.
In[1]:= Max[Limit[Sin[x], x -> Infinity]]
Out[1]= 1
In[2]:= Min[Limit[Sin[x], x -> Infinity]]
Out[2]= -1
I want to do something like this in Python (maybe SymPy).
Using sympy, evaluating limit(sin(x), x, oo) returns sin(oo), which is not a range or anything like that.
So I'm confused about how to get its upper/lower limit.
I don't think it's such a hard problem, but I can find little information on it.
Can I do that in Python?
Edit(2015/11/21)
I made some attempts along the lines @Teepeemm suggested, and they still bore no fruit.
This is the trial-and-error product:
import sympy as sym
from scipy.optimize import differential_evolution
sym.init_printing()
x,y,z,l,u = sym.symbols('x y z l u')
f = sym.simplify(sym.sin(x) * sym.exp(-x))
df = sym.diff(f,x)
numeric_df = sym.lambdify(x, df)
numeric_sup_l_u = differential_evolution(numeric_df, [(l,u)])
sup_l_u = sym.sympify(numeric_sup_l_u["x"])
sup_l = sym.limit(sup_l_u, u, sym.oo)
print(sym.simplify(sym.limit(sup_l, l, sym.oo)))
The two problems on the way:
Using scipy functions requires lambdifying expressions, but once I lambdify an expression, I cannot sympify its variables again.
I don't know how to give differential_evolution a range that includes sympy's variables or +oo.
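For what it's worth, recent SymPy versions handle this case directly: limit(sin(x), x, oo) now returns an AccumBounds object whose min and max attributes are the lower and upper limits. This is a sketch assuming a reasonably new SymPy (roughly 1.1 or later):

```python
from sympy import symbols, sin, limit, oo

x = symbols('x')

# The oscillating limit comes back as AccumBounds(-1, 1)
# instead of the unevaluated sin(oo)
lim = limit(sin(x), x, oo)
print(lim.min, lim.max)  # -1 1
```

This mirrors the Mathematica behaviour above, where Limit returns an interval that Max and Min can then pick apart.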