I have a problem of the form f(u(x,y), v(x,y)) = 0. As a simple example we could choose f = u^2*v. I want to perturb the state with u = u_0 + du, v = v_0 + dv.
Just doing it in sympy like:
import sympy as sp
x, y = sp.symbols("x, y")
u0, v0 = sp.symbols("u_0, v_0")
du = sp.Function("\\delta u")(x, y)
dv = sp.Function("\\delta v")(x, y)
u = u0 + du
v = v0 + dv
f = u**2 * v
print(sp.expand(f))
I get u_0**2*v_0 + u_0**2*\delta v(x, y) + 2*u_0*v_0*\delta u(x, y) + 2*u_0*\delta u(x, y)*\delta v(x, y) + v_0*\delta u(x, y)**2 + \delta u(x, y)**2*\delta v(x, y)
Is there a way to tell sympy that any product of deltas can be zero?
I tried using sp.Order but it doesn't work when the series is a power of some function.
Thanks
I usually approach these tasks from an expression-manipulation point of view: knowing what I am looking for, I seek to make that change directly (as opposed to letting something like series do what it should do). In this case you are looking for terms that contain a product of functions. Such a term could be a Power (like f**2) or a Mul (like x*f1*f2). So something like this could be done:
def funcpow(x):
    # total exponent of Function factors in a Mul or Pow term
    rv = 0
    if x.is_Mul or x.is_Pow:
        for i in Mul.make_args(x):
            b, e = i.as_base_exp()
            if b.is_Function:
                rv += e
    return rv
>>> eq.replace(lambda x: funcpow(x) > 1, lambda x: 0)
u_0**2*v_0 + u_0**2*\delta v(x, y) + 2*u_0*v_0*\delta u(x, y)
Often the conditions are placed directly in the call to replace, but since there are several here I made a helper function instead.
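For reference, a self-contained version of the whole exchange (the question's setup plus the funcpow helper) can be run as:

```python
import sympy as sp

x, y = sp.symbols("x, y")
u0, v0 = sp.symbols("u_0, v_0")
du = sp.Function("\\delta u")(x, y)
dv = sp.Function("\\delta v")(x, y)

def funcpow(expr):
    # total exponent of Function factors in a Mul or Pow term
    rv = 0
    if expr.is_Mul or expr.is_Pow:
        for i in sp.Mul.make_args(expr):
            b, e = i.as_base_exp()
            if b.is_Function:
                rv += e
    return rv

# Expand the perturbed expression, then zero out every term that is
# second order or higher in the deltas
eq = sp.expand((u0 + du)**2 * (v0 + dv))
linear = eq.replace(lambda t: funcpow(t) > 1, lambda t: 0)
print(linear)
```

This keeps only the terms that are at most first order in the delta functions.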
Related
Imagine we have a function like f(x*y); how can one take derivatives with respect to x*y in Python?
I tried renaming u = x*y and taking the derivative with respect to u, but it apparently doesn't work.
from sympy import symbols, diff, sin

x, y, z = symbols('x y z', real=True)
f = 4*x*y + x*sin(z) + x**3 + z**8*y
u = x*y
diff(f, u)  # raises ValueError: can't calculate derivative wrt x*y
And going one step further: how can we do that if we don't have an explicit definition of f(x*y)?
Thank you.
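Not from the thread, but one common workaround is to substitute a fresh symbol for x*y before differentiating; a minimal sketch:

```python
from sympy import symbols, diff, sin

x, y, z, u = symbols('x y z u', real=True)
f = 4*x*y + x*sin(z) + x**3 + z**8*y
# Replace the product x*y by the fresh symbol u, then differentiate with respect to u
g = f.subs(x*y, u)
print(diff(g, u))  # 4
```

Note that subs only catches places where x*y appears as an exact factor; for an unknown f one would instead work with Function('F')(x*y) and apply the chain rule.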
I do the following to generate an expression in Sympy:
1. Create some matrix Q_{ij} which holds some functions \eta, \mu, and \nu of x and y.
2. Sum over indices and take some partial derivatives.
3. Substitute some simple expressions for \eta, \mu, and \nu (say \sin(x)*\cos(y)).
4. Try to simplify the expression so that it explicitly evaluates the partial derivatives of the simple expressions.
When I do this, it gives me the following error:
NotImplementedError: Improve MV Derivative support in collect
The specific code that I used is:
from sympy import *

# Set up system and generate functions
x, y, z = symbols('x y z')
i, j, k, m, p = symbols('i j k m p')
xi = Matrix([x, y, z])
lims = range(0, 3)
eta = Function('eta', real=True)(x, y)
mu = Function('mu', real=True)(x, y)
nu = Function('nu', real=True)(x, y)
Q = Matrix([[2/sqrt(3)*eta, nu, 0],
            [nu, -1/sqrt(3)*eta + mu, 0],
            [0, 0, -1/sqrt(3)*eta - mu]])

# Create complicated expression of partial derivatives
Phi_L1 = -sum(Eijk(3, p + 1, i + 1)
              * diff(diff(diff(Q[k, l], xi[j]), xi[j]), xi[p])
              * diff(Q[k, l], xi[i])
              for i in lims
              for j in lims
              for k in lims
              for l in lims
              for p in lims
              )
Phi_L1 = simplify(Phi_L1)

# Choose example functions and try to evaluate expression explicitly
eta1 = sin(x)*cos(y)
mu1 = sinh(x)*cosh(y)
nu1 = x**2*y**2
expr1 = Phi_L1.subs(eta, eta1).subs(mu, mu1).subs(nu, nu1)
simplify(expr1)
I couldn't find a simpler example that gives the same error. For example, the following works as intended:
f = Function('f', real=True)(x, y)
expr = diff(diff(f, x), y)
simplify(expr.subs(f, sinh(x)*cosh(y)))
'collect' and 'simplify' have known problems with higher-order partial derivatives; https://github.com/sympy/sympy/issues/9068 outlines the issue.
The example shown is
import sympy as sp
x, y = sp.symbols("x y")
f, g = sp.symbols("f g", cls=sp.Function)
f, g = f(x, y), g(x, y)
expr1 = g.diff(x, 2) + f.diff(x, 2) + 5*f.diff(x, 2) + 2*f.diff(x) + 9*f.diff(x)
expr2 = g.diff(x, 2) + f.diff(x, 2) + 5*f.diff(x, 2) + 2*f.diff(x) + 9*f.diff(x) + g.diff(x).diff(y) + f.diff(x).diff(y)
which works correctly and shows the expected output for both expr1 and expr2.
sp.collect(expr1, f) works wonderfully, but sp.collect(expr2, f) fails with the known error, as the implementation is not finished...
This does not address the error (which is apparently a bug), but one workaround for the case of explicitly taking derivatives of simple functions is to just use the doit() method. In this case, that would look like (following the code posted in the question):
simplify(expr1.doit())
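The same idea on a tiny example: doit() forces an unevaluated Derivative to be carried out after the concrete function has been substituted in.

```python
from sympy import Function, symbols, sin, cos, Derivative

x, y = symbols('x y')
f = Function('f', real=True)(x, y)
# Unevaluated mixed partial derivative of the generic function f
d = Derivative(f, x, y)
# Substitute a concrete function, then force the differentiation
evaluated = d.subs(f, sin(x)*cos(y)).doit()
print(evaluated)
```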
I am having some trouble with this question. I am given this system of equations
dx / dt = -y -z
dy / dt = x + a * y
dz / dt = b + z * (x - c)
and default values a=0.1, b=0.1, c=14 and also the Runge-Kutta algorithm:
import numpy as np

def rk4(f, xvinit, Tmax, N):
    T = np.linspace(0, Tmax, N+1)
    xv = np.zeros((len(T), len(xvinit)))
    xv[0] = xvinit
    h = Tmax / N
    for i in range(N):
        k1 = f(xv[i])
        k2 = f(xv[i] + h/2.0*k1)
        k3 = f(xv[i] + h/2.0*k2)
        k4 = f(xv[i] + h*k3)
        xv[i+1] = xv[i] + h/6.0*(k1 + 2*k2 + 2*k3 + k4)
    return T, xv
I need to solve this system from t=0 to t=100 in time steps of 0.1, using initial conditions (x0, y0, z0) = (0, 0, 0) at t=0.
I'm not really sure where to begin. I've tried defining a function to evaluate the right-hand side of the oscillator:
def roessler(xyx, a=0.1, b=0.1, c=14):
    xyx = (x, y, x)
    dxdt = -y - z
    dydt = x + a*y
    dzdt = b + z*(x - c)
    return dxdt, dydt, dzdt
which returns the right-hand side of the system with default values. I've then tried to solve it by replacing f with roessler and filling in the values I'm given for xvinit, Tmax, and N, but it's not working.
Any help is appreciated; sorry if some of this is formatted wrong, I'm new here.
Well, you almost got it already. Changing your roessler function to the following
def roessler(xyx, a=0.1, b=0.1, c=14):
    x, y, z = xyx
    dxdt = -y - z
    dydt = x + a*y
    dzdt = b + z*(x - c)
    return np.array([dxdt, dydt, dzdt])
and then calling
T, sol = rk4(roessler, np.array([0, 0, 0]), 100, 1000)
makes it work.
Setting aside the typo in the first line of your roessler function, the key to solving this is to understand that you have a system of differential equations, i.e., you need to work with vectors. While you already had the vector-valued input correct, you also need to make the output of roessler a vector and supply an initial value of the appropriate shape.
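Putting the pieces together (the rk4 routine from the question plus the corrected roessler), a complete run over t in [0, 100] with the requested step size 0.1, i.e. N = 1000, looks like this:

```python
import numpy as np

def rk4(f, xvinit, Tmax, N):
    # Classic fourth-order Runge-Kutta for an autonomous system x' = f(x)
    T = np.linspace(0, Tmax, N+1)
    xv = np.zeros((len(T), len(xvinit)))
    xv[0] = xvinit
    h = Tmax / N
    for i in range(N):
        k1 = f(xv[i])
        k2 = f(xv[i] + h/2.0*k1)
        k3 = f(xv[i] + h/2.0*k2)
        k4 = f(xv[i] + h*k3)
        xv[i+1] = xv[i] + h/6.0*(k1 + 2*k2 + 2*k3 + k4)
    return T, xv

def roessler(xyz, a=0.1, b=0.1, c=14):
    # Right-hand side of the Roessler system as a vector
    x, y, z = xyz
    return np.array([-y - z, x + a*y, b + z*(x - c)])

T, sol = rk4(roessler, np.array([0.0, 0.0, 0.0]), 100, 1000)
print(sol.shape)  # (1001, 3)
```

sol then holds one row per time step, with columns x, y, z.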
I want to work with generic functions as long as possible, and only substitute functions at the end.
I'd like to define a function as the derivative of another one, define a generic expression with the function and its derivative, and substitute the function at the end.
Right now my attempt is as follows, but I get the error 'Derivative' object is not callable:
from sympy import symbols, Function, Lambda, cos

x, y, z = symbols('x y z')
X = symbols('X')
f = Function('f')
df = f(x).diff(x)  # <<< I'd like this to be a function of dummy variable x
expr = f(x) * df(z) + df(y) + df(0)  # df is unfortunately not callable
# At the end, substitute with example function
expr.replace(f, Lambda(X, cos(X)))  # should return: -cos(x)*sin(z) - sin(y) - sin(0)
I think I got it to work with integrals as follows:
I = Lambda(x, integrate(f(y), (y, 0, x)))
but that won't work for derivatives.
If that helps, I'm fine restricting myself to functions of a single variable for now.
As a bonus, I'd like to get this to work with any combination (products, derivatives, integrals) of the original function.
It's pretty disappointing that f.diff(x) doesn't work, as you say. Maybe someone will add support for it in the future. In the meantime, there are two ways to go about it: either substitute your y, z, ... in for x, OR lambdify df.
I think the first option will work more consistently in the long run (for example, if you decide to extend to multivariate calculus), but the expr in the second option is far more natural.
Using substitution:
from sympy import *
x, y, z = symbols('x y z')
X = Symbol('X')
f = Function('f')
df = f(x).diff(x)
expr = f(x) * df.subs(x, z) + df.subs(x, y) + df.subs(x, 0)
print(expr.replace(f, Lambda(X, cos(X))).doit())
Lambdifying df:
from sympy import *
x, y, z = symbols('x y z')
X = Symbol('X')
f = Function('f')
df = lambda t: f(t).diff(t) if isinstance(t, Symbol) else f(X).diff(X).subs(X, t)
expr = f(x) * df(z) + df(y) + df(0)
print(expr.replace(f, Lambda(X, cos(X))).doit())
Both give the desired output.
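As a quick check that the substitution approach also covers the integral case mentioned in the question, here is a small sketch (the upper bound 2 is an arbitrary choice):

```python
from sympy import symbols, Symbol, Function, Lambda, Integral, cos, sin, simplify

x, y = symbols('x y')
X = Symbol('X')
f = Function('f')
# Unevaluated integral of the generic f from 0 to x
F = Integral(f(y), (y, 0, x))
expr = F + F.subs(x, 2)
# Substitute a concrete function at the end, then evaluate
result = expr.replace(f, Lambda(X, cos(X))).doit()
print(result)
```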
Let's suppose that I have two transcendental equations f(x, y) = 0 and g(a, b) = 0.
a and b depend on y, so if I could solve the first equation analytically for y, y = y(x), the second equation would depend only on x and I could solve it numerically.
I prefer to use Python, but MATLAB is fine if it can handle this.
Is there a way to solve transcendental equations analytically for a variable with Python/MATLAB? A Taylor approximation is fine too, as long as I can choose the order of the approximation.
I tried running this through Sympy like so:
import sympy
j, k, m, x, y = sympy.symbols("j k m x y")
eq = sympy.Eq(k * sympy.tan(y) + j * sympy.tan(sympy.asin(sympy.sin(y) / x)), m)
eq.simplify()
which turned your equation into
Eq(m, j*sin(y)/(x*sqrt(1 - sin(y)**2/x**2)) + k*tan(y))
which, after a bit more poking, gives us
k * tan(y) + j * sin(y) / sqrt(x**2 - sin(y)**2) == m
We can find an expression for x(y) like
sympy.solve(eq, x)
which returns
[-sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2),
sqrt(j**2*sin(y)**2/(k*tan(y) - m)**2 + sin(y)**2)]
but an analytic solution for y(x) fails.
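Since an analytic y(x) is out of reach, a numerical fallback is to fix x and solve for y with sympy.nsolve; the parameter values j = k = m = 1 and x = 2 below are made up purely for illustration:

```python
import sympy

y = sympy.symbols("y")
# Hypothetical parameter values, chosen only to make the sketch concrete
j, k, m, xval = 1, 1, 1, 2
eq = k*sympy.tan(y) + j*sympy.sin(y)/sympy.sqrt(xval**2 - sympy.sin(y)**2) - m
# Newton iteration from an initial guess picked by inspecting the function
root = sympy.nsolve(eq, y, 0.5)
print(root)
```

Repeating this over a grid of x values gives y(x) pointwise.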