SymPy "solves" a differential equation it shouldn't solve - python

Here's what I did:
from sympy import *
x = symbols("x")
y = Function("y")
dsolve(diff(y(x),x) - y(x)**x)
The answer I get (SymPy 1.0) is:
Eq(y(x), (C1 - x*(x - 1))**(1/(-x + 1)))
But that's wrong. Neither Mathematica nor Maple can solve this ODE. What's happening here?

A bug. SymPy thinks it's a Bernoulli equation
y' = P(x) * y + Q(x) * y**n
without checking that the exponent n is constant, so the solution it returns is wrong.
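You can check for yourself that the reported solution is spurious by substituting it back into the ODE; here is a quick sketch (my addition, not part of the original answer) using checkodesol:
from sympy import symbols, Function, Eq, diff, checkodesol

x, C1 = symbols("x C1")
y = Function("y")
ode = Eq(diff(y(x), x), y(x)**x)
claimed = Eq(y(x), (C1 - x*(x - 1))**(1/(-x + 1)))
# checkodesol returns (True, 0) for a genuine solution; for this
# spurious one the residual does not simplify to zero.
print(checkodesol(ode, claimed))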
I raised an issue on the SymPy tracker. It should soon be fixed in the development version of SymPy and subsequently in version 1.2. (As an aside, 1.0 is a bit old; many things have improved in 1.1.1, although not this one.)
With the correction, SymPy recognizes that there is no explicit solution and falls back to the power series method, producing a few terms of the series:
Eq(y(x), x + x**2*log(C1)/2 + x**3*(log(C1)**2 + 2/C1)/6 + x**4*(log(C1)**3 + 9*log(C1)/C1 - 3/C1**2)/24 + x**5*(log(C1)**4 + 2*(log(C1) - 1/C1)*log(C1)/C1 + 2*(2*log(C1) - 1/C1)*log(C1)/C1 + 22*log(C1)**2/C1 - 20*log(C1)/C1**2 + 20/C1**2 + 8/C1**3)/120 + C1 + O(x**6))
You don't have to wait for the patch to get this power series; it can be obtained by giving SymPy a hint:
dsolve(diff(y(x), x) - y(x)**x, hint='1st_power_series')
It works better with an initial condition:
dsolve(diff(y(x), x) - y(x)**x, ics={y(0): 1}, hint='1st_power_series')
returns
Eq(y(x), 1 + x + x**3/3 - x**4/8 + 7*x**5/30 + O(x**6))

Lagrange Multipliers /w sympy

I am currently trying to find the maximum radius of a circle I can fit between existing circles around it.
That is, I'm trying to find not only the maximum radius, but also the center point best suited for it along a specific given straight line.
In order to find this maximum I'm trying to implement a generalized Lagrange multiplier solution using sympy.
If n is the number of constraints I have, then I was able to:
Create a generator of n multiplier symbols.
Take the gradient of the Lagrange function with respect to x, y and the n multipliers.
Build the required inequalities (from the constraints) to obtain the list of equalities and inequalities that need to be solved.
The code:
from sympy import S
from sympy import *
import sympy as smp

# Lagrange Multipliers
def sympy_distfun(cx, cy, radius):
    x, y = smp.symbols('x y', real=True)
    return sqrt((x - cx)**2 + (y - cy)**2) - radius

def sympy_circlefun(cx, cy, radius):
    x, y = smp.symbols('x y', real=True)
    return (x - cx)**2 + (y - cy)**2 - radius**2

def sympy_linefun(slope, b):
    x, y = smp.symbols('x y', real=True)
    return slope*x + b - y

def lagrange_multiplier(objective, constraints):
    x, y = smp.symbols('x y', real=True)
    a = list(smp.symbols('a0:%d' % len(constraints), real=True))
    cons = [constraints[i]*a[i] for i in range(len(a))]
    L = objective + (-1)*sum(cons)
    gradL = [smp.diff(L, var) for var in [x, y] + a]
    constraints = [con >= 0 for con in constraints]
    eqs = gradL + constraints
    vars = a + [x, y]
    solution = smp.solve(eqs[0], vars)
    #solution = smp.solveset(eqs, vars)
    print(solution)

line = sympy_linefun(0.66666, -4.3333)
dist = sympy_distfun(11, 3, 4)
circlefunc1 = sympy_circlefun(11, 3, 4)
circlefunc2 = sympy_circlefun(0, 0, 3)
lagrange_multiplier(dist, [line, circlefunc1, circlefunc2])
But when using smp.solveset(eqs, vars) I encounter this error message:
ValueError: [-0.66666*a0 - a1*(2*x - 22) - 2*a2*x + (x - 11)/sqrt((x - 11)**2 + (y - 3)**2), a0 - a1*(2*y - 6) - 2*a2*y + (y - 3)/sqrt((x - 11)**2 + (y - 3)**2), -0.66666*x + y + 4.3333, -(x - 11)**2 - (y - 3)**2 + 16, -x**2 - y**2 + 9, 0.66666*x - y - 4.3333 >= 0, (x - 11)**2 + (y - 3)**2 - 16 >= 0, x**2 + y**2 - 9 >= 0] is not a valid SymPy expression
When using solution = smp.solve(eqs[0], vars) to try solving just one equation, sympy goes into a CPU-crushing frenzy and fails to complete the calculation. I made sure to declare all variables as real, so I fail to see why it takes so long to solve.
I would like to understand what I'm missing when it comes to handling multiple inequalities with sympy, and if there is a faster, more optimized way to solve the Lagrange multiplier system I'd love to give it a try.
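One workaround worth sketching (my addition, assuming the standard Lagrange approach applies): sympy's solve accepts only equations, so solve the stationarity conditions alone and filter the candidate points against the inequality constraints afterwards:
import sympy as smp

x, y, a0 = smp.symbols('x y a0', real=True)

# Toy problem for illustration (made-up objective and constraint):
# minimize x**2 + y**2 subject to x + y - 1 = 0.
objective = x**2 + y**2
constraint = x + y - 1

L = objective - a0*constraint
gradL = [smp.diff(L, v) for v in (x, y, a0)]

# Pass only the equations to solve(); it rejects inequalities.
candidates = smp.solve(gradL, [x, y, a0], dict=True)

# Filter candidates against any inequality constraints afterwards.
feasible = [s for s in candidates if constraint.subs(s) >= 0]
print(feasible)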

Are there tricks and tweaks for unevaluated integrals in SymPy?

I am trying to verify a formula from a paper on mechanics by retracing its derivation, which involves integrals like this one:
Integrate(x/(b + 2*(a - sqrt(x)*sqrt(2*a - x)))**3, (x, 0, a))
The original source suggests that a closed-form solution is possible, but it seems quite tricky. I tried my best with SymPy (also with Maxima and Mathematica), fiddling around with different assumptions and simplifications, without any success:
import sympy as sp
from IPython.display import display, Math
x = sp.symbols('x', nonnegative=True, real=True, finite=True)
a = sp.symbols('a', positive=True, real=True, finite=True)
b = sp.symbols('b', positive=True, real=True, finite=True)
# print("assumptions for x:", x.assumptions0)
# print("assumptions for a, b:", r.assumptions0)
expr = x/(b + 2*(a-sp.sqrt(x)*sp.sqrt(2*a-x)))**3
expr = sp.expand(expr)
display(Math("e = "+sp.latex(expr)))
integ = sp.integrate(expr, (x, 0, a))
display(Math("integ = "+sp.latex(integ)))
SymPy returns my integrals unevaluated. Are there any other tricks and tweaks I could try to get a solution? The parameters a and b are physical dimensions (finite positive real numbers).
The answer found using Rubi is
(a^2*(2*a + b)^4*(Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]*Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]*
(Sqrt[b]*(-2*a + b)*Sqrt[-(4*a + b)^2] + 6*a*Sqrt[-4*a - b]*(2*a + b)*ArcTan[(2*a)/(Sqrt[b]*Sqrt[4*a + b])]) -
6*a*(2*a + b)^2*Sqrt[4*a + b]*ArcTanh[Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]/Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]] +
6*a*(2*a + b)^2*Sqrt[4*a + b]*ArcTanh[Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]/Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]]))/
(2*(2*a - Sqrt[-4*a - b]*Sqrt[b])^(5/2)*(2*a + Sqrt[-4*a - b]*Sqrt[b])^(5/2)*Sqrt[-4*a - b]*b^(5/2)*(4*a + b)^(5/2))
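One more trick worth trying in SymPy (my suggestion, not from the original post): help the integrator with a trigonometric substitution. Since sqrt(x)*sqrt(2*a - x) = sqrt(a**2 - (a - x)**2), the substitution x = a*(1 + sin(t)) turns the root into a*cos(t). SymPy may still return the result unevaluated, but the integrand becomes a simpler target:
import sympy as sp

a, b, t = sp.symbols('a b t', positive=True)

# x = a*(1 + sin(t)) maps t in [-pi/2, 0] to x in [0, a];
# then sqrt(x)*sqrt(2*a - x) = a*cos(t) and dx = a*cos(t)*dt.
x_sub = a*(1 + sp.sin(t))
integrand = x_sub / (b + 2*(a - a*sp.cos(t)))**3 * a*sp.cos(t)

# This can be slow and may still come back unevaluated.
result = sp.integrate(sp.simplify(integrand), (t, -sp.pi/2, 0))
print(result)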

Python: Intersection of two equations

I have the following equations:
sqrt((x0 - x)^2 + (y0 - y)^2) - sqrt((x1 - x)^2 + (y1 - y)^2) = c1
sqrt((x3 - x)^2 + (y3 - y)^2) - sqrt((x4 - x)^2 + (y4 - y)^2) = c2
And I would like to find their intersection. I tried using fsolve after rewriting the equations as functions f(x), and it worked for small numbers. However, I am working with huge numbers, and solving the equations involves many calculations, in particular a square root of a difference. With huge numbers precision is lost, the left operand ends up smaller than the right one, and I get a math domain error from taking the square root of a negative number.
I am trying to work around this issue in different ways:
Trying to use higher-precision floats. I tried numpy.float128, but fsolve won't accept it.
Searching for a library that can solve nonlinear systems of equations, but no luck so far.
Any help/guidance/tips will be appreciated!!
Thanks!!
Taking all the advice, I ended up using code like the following for the system:
0 = x + y - 8
0 = sqrt((-6 - x)^2 + (4 - y)^2) - sqrt((1 - x)^2 + y^2) - 5
from math import sqrt
import numpy as np
from scipy.optimize import fsolve

def f(x):
    y = np.zeros(2)
    y[0] = x[1] + x[0] - 8
    y[1] = sqrt((-6 - x[0]) ** 2 + (4 - x[1]) ** 2) - sqrt((1 - x[0]) ** 2 + x[1] ** 2) - 5
    return y

x0 = np.array([0, 0])
solution = fsolve(f, x0)
print("(x, y) = (" + str(solution[0]) + ", " + str(solution[1]) + ")")
Note: the line x0 = np.array([0, 0]) provides the initial guess ("seed") that fsolve uses to converge to a solution. It is important to use an initial guess close to the actual solution.
The example provided works :)
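As an aside (my addition, not from the original thread): if double-precision rounding is the real obstacle, mpmath's findroot can solve the same system at arbitrary precision:
from mpmath import mp, mpf, sqrt, findroot

mp.dps = 50  # work with 50 significant digits

f1 = lambda x, y: x + y - 8
f2 = lambda x, y: sqrt((-6 - x)**2 + (4 - y)**2) - sqrt((1 - x)**2 + y**2) - 5

# Same system and initial guess (0, 0) as the fsolve example above.
x, y = findroot([f1, f2], (mpf(0), mpf(0)))
print(x, y)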
You might find some use in SymPy, which is a symbolic algebra library for Python.
From its home page:
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
Since you have a nonlinear system of equations, you need some kind of solver or optimizer. You could probably use something from scipy.optimize (https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html). However, as I have no experience with those scipy functions, I can only offer you a solution using the gradient descent method of the tensorflow library. You can find a short guide here: https://learningtensorflow.com/lesson7/ (check out the gradient descent chapter). Analogously to the method described there, you could do something like this:
import numpy as np
import tensorflow as tf

# These arrays are pseudo code, fill in your values for x0, x1, y0, y1, ...
x_array = [x0, x1, x3, x4]
y_array = [y0, y1, y3, y4]
c_array = [c1, c2]

# Tensorflow model starts here
x = tf.placeholder("float")
y = tf.placeholder("float")
z = tf.placeholder("float")

# the array [0.0, 0.0] holds the initial guesses for the "correct" x and y
# (floats, so the optimizer can compute gradients)
xy_array = tf.Variable([0.0, 0.0], name="xy_array")
x0 = tf.constant(x_array[0], name="x0")
x1 = tf.constant(x_array[1], name="x1")
x3 = tf.constant(x_array[2], name="x3")
x4 = tf.constant(x_array[3], name="x4")
y0 = tf.constant(y_array[0], name="y0")
y1 = tf.constant(y_array[1], name="y1")
y3 = tf.constant(y_array[2], name="y3")
y4 = tf.constant(y_array[3], name="y4")
c1 = tf.constant(c_array[0], name="c1")
c2 = tf.constant(c_array[1], name="c2")

# I took your first line and subtracted c1 from it, same for the second line, introducing d_1 and d_2
d_1 = tf.sqrt(tf.square(x0 - xy_array[0]) + tf.square(y0 - xy_array[1])) - tf.sqrt(tf.square(x1 - xy_array[0]) + tf.square(y1 - xy_array[1])) - c1
d_2 = tf.sqrt(tf.square(x3 - xy_array[0]) + tf.square(y3 - xy_array[1])) - tf.sqrt(tf.square(x4 - xy_array[0]) + tf.square(y4 - xy_array[1])) - c2

# this z_model should be zero in the end; in that case there is an intersection
z_model = d_1 - d_2
error = tf.square(z - z_model)
# you can try different values for the "learning rate", here 0.01
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(error)

model = tf.global_variables_initializer()
with tf.Session() as session:
    session.run(model)
    # here you are creating a "training set" of size 1000; you can also make it bigger
    for i in range(1000):
        x_value = np.random.rand()
        y_value = np.random.rand()
        d1_value = np.sqrt(np.square(x_array[0] - x_value) + np.square(y_array[0] - y_value)) - np.sqrt(np.square(x_array[1] - x_value) + np.square(y_array[1] - y_value)) - c_array[0]
        d2_value = np.sqrt(np.square(x_array[2] - x_value) + np.square(y_array[2] - y_value)) - np.sqrt(np.square(x_array[3] - x_value) + np.square(y_array[3] - y_value)) - c_array[1]
        z_value = d1_value - d2_value
        session.run(train_op, feed_dict={x: x_value, y: y_value, z: z_value})
    xy_value = session.run(xy_array)
    print("Predicted intersection: x = {a:.3f}, y = {b:.3f}".format(a=xy_value[0], b=xy_value[1]))
But be aware: this code will probably run for a while... which is why I haven't tested it.
Also, I am currently not sure what will happen if there is no intersection. You would probably get the coordinates of the point of closest approach of the two curves.
Tensorflow can be somewhat difficult if you haven't used it yet, but it is worth learning, as you can also use it for deep learning applications (the actual purpose of this library).
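For completeness, the scipy.optimize route mentioned at the start of this answer could look like the following sketch (with made-up anchor points and constants, purely for illustration); least_squares minimizes the squared residuals of both equations at once:
import numpy as np
from scipy.optimize import least_squares

# Hypothetical values standing in for x0, y0, ..., c1, c2:
x0, y0, x1, y1 = -6.0, 4.0, 1.0, 0.0
x3, y3, x4, y4 = 0.0, 0.0, 3.0, 2.0
c1, c2 = 2.0, 1.0

def residuals(p):
    x, y = p
    r1 = np.hypot(x0 - x, y0 - y) - np.hypot(x1 - x, y1 - y) - c1
    r2 = np.hypot(x3 - x, y3 - y) - np.hypot(x4 - x, y4 - y) - c2
    return [r1, r2]

sol = least_squares(residuals, [0.0, 0.0])
print(sol.x)  # a residual near zero means the curves actually intersect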

Error in sympy.solve on Freudenstein equation

I'm trying to obtain one of the angles of the Freudenstein equation (psi):
k1 * cos(psi) - k2 * cos(fi) + k3 - cos(psi - fi) = 0
I have values for k1, k2, k3 and fi. I tried the following:
from sympy import Symbol, solve, cos
x = Symbol('x')
realPsi = solve(k1 * cos(x) - k2 * cos(fi) + k3 - cos(x - fi), x)
I receive this error:
File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 484, in solve solution = _solve(f, *symbols, **flags)
File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 700, in _solve soln = tsolve(f_num, symbol)
File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 1143, in tsolve "(tsolve: at least one Function expected at this point")
NotImplementedError: Unable to solve the equation(tsolve: at least one Function expected at this point
I haven't used these kinds of tools before; maybe I'm doing something really wrong...
Any idea?
Thanks,
Héctor.
EDIT:
Thanks for the fast response.
I tried the following (simple equation with cos):
eq = 3.2 * cos(x + 0.2).rewrite(exp) + 1.7
eq
Out[1]: 1.6*exp(I*(-x - 0.2)) + 1.6*exp(I*(x + 0.2)) + 1.7
solve(1.6*exp(I*(-x - 0.2)) + 1.6*exp(I*(x + 0.2)) + 1.7, x)
NotImplementedError: Unable to solve the equation(tsolve: at least one Function expected at this point
Am I using .rewrite correctly?
Of course it should "just work", but here is a case where, with a little help, you can get an answer for the "simple equation with cos" given above:
>>> eq=3.2*cos(x+.2)+1.7
>>> [w.n(3,chop=True) for w in solve(expand(eq.rewrite(exp)))]
[-2.33, 1.93]
NotImplementedError means what it says, namely that a solver for this type of equation is "not implemented".
You can help SymPy a bit to find the solution:
>>> k * cos(x) - m * cos(y) + n - cos(x - y)
k*cos(x) - m*cos(y) + n - cos(x - y)
>>> _.rewrite(exp)
k*(exp(I*x)/2 + exp(-I*x)/2) - .....
>>> solve(_, x)
..... long solution
You can use rewrite to transform expressions written with trigonometric functions into expressions containing complex exponentials.
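If a numeric angle is enough (my addition, with made-up constants for illustration): once k1, k2, k3 and fi are concrete numbers, nsolve sidesteps tsolve entirely:
from sympy import symbols, cos, nsolve

psi = symbols('psi')
k1, k2, k3, fi = 1.2, 0.8, 0.5, 0.3  # hypothetical values
eq = k1*cos(psi) - k2*cos(fi) + k3 - cos(psi - fi)
print(nsolve(eq, psi, 0))  # 0 is the initial guess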

Equation solving in Python

I am trying to solve equations such as the following for x:
log(x + alpha_1) + log(x + alpha_2) + ... + log(x + alpha_N) = K
Here the alphas and K are given, and N will be upwards of 1,000. Is there a way to specify the LHS given an np.array for the alphas using sympy? My hope was to define:
eqn = Eq(LHS - K)
solve(eqn,x)
by telling sympy that LHS = sum(log(a_i + x)).
Any tips on solvers which would do this the fastest would also be appreciated. Thanks!
I was hoping for something like:
from sympy import Symbol, symbols, solve, summation, log
import numpy as np
N=10
K=1
alpha=np.random.randn(N, 1)
x = Symbol('x')
i = Symbol('i')
eqn = summation(log(x+alpha[i]), (i, 1, N))
solve(eqn-K,x)
You can't index a NumPy array with a SymPy symbol. Since your sum is finite, just use the Python sum function:
>>> alpha=np.random.randn(1, N)
>>> sum([log(x + i) for i in alpha[0]])
log(x - 1.85289943713841) + log(x - 1.40121781484552) + log(x - 1.21850393539695) + log(x - 0.605693136420962) + log(x - 0.575839713282035) + log(x - 0.105389419698408) + log(x + 0.415055726774043) + log(x + 0.71601559149345) + log(x + 0.866995633213984) + log(x + 1.12521825562504)
But even so, I don't see why you don't just rewrite this as (x - alpha[0])*(x - alpha[1])*...*(x - alpha[N - 1]) - exp(K), as suggested by Warren Weckesser. You can then use a numerical solver like SymPy's nsolve, or something from another library, to solve it numerically:
>>> nsolve(Mul(*[(x - i) for i in alpha[0]]) - exp(K), 1)
mpf('1.2696755961730152')
You could also solve the log expression numerically, but unless your logs can have negative arguments, these should be the same.
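One more numeric option (my addition, with hypothetical data): on the interval x > max(-alpha) every log term is defined and the left-hand side is strictly increasing, so a bracketing solver like scipy's brentq is very reliable there:
import numpy as np
from scipy.optimize import brentq

np.random.seed(0)  # hypothetical data for illustration
alpha = np.random.randn(10)
K = 1.0

f = lambda x: np.sum(np.log(x + alpha)) - K

lo = -alpha.min() + 1e-12  # just right of the largest singularity
hi = lo + 1.0
while f(hi) < 0:  # grow the bracket until the sign changes
    hi *= 2
print(brentq(f, lo, hi))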
