I am trying to verify a formula from a mechanics paper by retracing its derivation, which involves integrals like the one below.
The original source suggests that a closed-form solution exists, but it seems quite tricky. I tried my best with SymPy (and also with Maxima and Mathematica), fiddling with different assumptions and simplifications, without any success:
import sympy as sp
from IPython.display import display, Math
x = sp.symbols('x', nonnegative=True, real=True, finite=True)
a = sp.symbols('a', positive=True, real=True, finite=True)
b = sp.symbols('b', positive=True, real=True, finite=True)
# print("assumptions for x:", x.assumptions0)
# print("assumptions for a, b:", a.assumptions0, b.assumptions0)
expr = x/(b + 2*(a-sp.sqrt(x)*sp.sqrt(2*a-x)))**3
expr = sp.expand(expr)
display(Math("e = "+sp.latex(expr)))
integ = sp.integrate(expr, (x, 0, a))
display(Math("integ = "+sp.latex(integ)))
SymPy returns my integrals unevaluated. Are there any other tricks or tweaks I could try to get a solution? The parameters a and b are physical dimensions (finite positive real numbers).
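As a quick numerical sanity check (a sketch; a = b = 1 is an arbitrary sample choice, not a value from the paper), the integral can be evaluated with mpmath, which SymPy already depends on:

```python
import mpmath as mp

# arbitrary sample parameter values for the check
a, b = 1.0, 1.0

# the integrand from the SymPy code above; note that sqrt(x)*sqrt(2a - x) <= a
# on [0, a], so the denominator stays >= b**3 and the integral is finite
f = lambda x: x / (b + 2*(a - mp.sqrt(x) * mp.sqrt(2*a - x)))**3
val = mp.quad(f, [0, a])
print(val)
```

A finite value here confirms the integral converges for positive a and b, so an unevaluated result from integrate points to a limitation of the algorithm rather than a divergence.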
The answer found using Rubi is
(a^2*(2*a + b)^4*(Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]*Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]*
(Sqrt[b]*(-2*a + b)*Sqrt[-(4*a + b)^2] + 6*a*Sqrt[-4*a - b]*(2*a + b)*ArcTan[(2*a)/(Sqrt[b]*Sqrt[4*a + b])]) -
6*a*(2*a + b)^2*Sqrt[4*a + b]*ArcTanh[Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]/Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]] +
6*a*(2*a + b)^2*Sqrt[4*a + b]*ArcTanh[Sqrt[2*a + Sqrt[-4*a - b]*Sqrt[b]]/Sqrt[2*a - Sqrt[-4*a - b]*Sqrt[b]]]))/
(2*(2*a - Sqrt[-4*a - b]*Sqrt[b])^(5/2)*(2*a + Sqrt[-4*a - b]*Sqrt[b])^(5/2)*Sqrt[-4*a - b]*b^(5/2)*(4*a + b)^(5/2))
Is it possible to solve a cubic equation without using SymPy?
Example:
import sympy as sp
xp = 30
num = xp + 4.44
sp.var('x, a, b, c, d')
Sol3 = sp.solve(0.0509 * x ** 3 + 0.0192 * x ** 2 + 3.68 * x - num, x)
The result is:
[6.07118098358257, -3.2241955998463 - 10.0524891203436*I, -3.2241955998463 + 10.0524891203436*I]
But I want to find a way to do it with NumPy, or without any third-party library at all.
I tried with NumPy:
import numpy as np
coeff = [0.0509, 0.0192, 3.68, --4.44]
print(np.roots(coeff))
But the result is:
[ 0.40668245+8.54994773j 0.40668245-8.54994773j -1.19057511+0.j]
In your NumPy method you are making a slight mistake with the final coefficient.
In the SymPy example your last coefficient is -num, which is, according to your code: -num = -(xp + 4.44) = -(30 + 4.44) = -34.44
In your NumPy example your last coefficient is --4.44, which is 4.44 and does not equal -34.44.
If you edit the NumPy code you will get:
import numpy as np
coeff = [0.0509, 0.0192, 3.68, -34.44]
print(np.roots(coeff))
[-3.2241956 +10.05248912j -3.2241956 -10.05248912j
6.07118098 +0.j ]
The answers are thus the same (note that NumPy uses j to indicate a complex number, while SymPy uses I).
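As a quick check (a sketch), you can verify the NumPy roots by substituting them back into the polynomial with np.polyval:

```python
import numpy as np

coeff = [0.0509, 0.0192, 3.68, -34.44]
roots = np.roots(coeff)

# each root should make the polynomial (numerically) vanish
residuals = np.polyval(coeff, roots)
print(np.abs(residuals))
```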
You could implement the cubic formula yourself;
this YouTube video from Mathologer may help you understand it.
Based on that, the cubic formula for ax^3 + bx^2 + cx + d = 0 can be written like this:
def cubic(a, b, c, d):
    # Cardano's formula after reducing to a depressed cubic
    n = -b**3/27/a**3 + b*c/6/a**2 - d/2/a
    s = (n**2 + (c/3/a - b**2/9/a**2)**3)**0.5
    r0 = (n-s)**(1/3) + (n+s)**(1/3) - b/3/a   # the real root
    # r1 and r2 as written do NOT give the other two roots;
    # they would need the complex cube roots of unity
    r1 = (n+s)**(1/3) + (n+s)**(1/3) - b/3/a
    r2 = (n-s)**(1/3) + (n-s)**(1/3) - b/3/a
    return (r0, r1, r2)
The simplified version of the formula only needs to get c and d as parameters (aka p and q) and can be implemented like this:
def cubic(p, q):
    n = -q/2
    s = (q*q/4 + p**3/27)**0.5
    r0 = (n-s)**(1/3) + (n+s)**(1/3)   # the real root
    r1 = (n+s)**(1/3) + (n+s)**(1/3)   # placeholder, not a true root
    r2 = (n-s)**(1/3) + (n-s)**(1/3)   # placeholder, not a true root
    return (r0, r1, r2)
print(cubic(-15,-126))
(5.999999999999999, 9.999999999999998, 2.0)
I'll let you mix in complex number operations to properly get all 3 roots.
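To actually get all three roots, one can mix in the complex cube roots of unity, as the answer hints. A sketch for the depressed cubic x**3 + p*x + q = 0 (the function name cubic_depressed is mine):

```python
import cmath

def cubic_depressed(p, q):
    """All three roots of x**3 + p*x + q = 0 via Cardano's formula."""
    n = -q / 2
    s = cmath.sqrt(q*q/4 + p**3/27)      # complex when all three roots are real
    u = complex(n + s) ** (1/3)          # principal complex cube root
    # pick v on the matching branch so that u*v == -p/3
    v = -p / (3*u) if u != 0 else complex(n - s) ** (1/3)
    w = complex(-0.5, 3**0.5 / 2)        # primitive cube root of unity
    return (u + v, u*w + v*w**2, u*w**2 + v*w)

print(cubic_depressed(-15, -126))
```

For x**3 - 15*x - 126 this yields the real root 6 together with the complex pair -3 ± 2*sqrt(3)*i, unlike the placeholder r1 and r2 above.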
I have an assignment on SymPy and am struggling with the following question:
"Prove with the help of SymPy that 4*(x**2 + y**2 - a*x)**3 = 27*a**2*(x**2 + y**2)**2 can be written in polar form as r = 4*a*cos(theta/3)**3."
I have tried to substitute x = r*cos(theta) and y = r*sin(theta).
Then I tried sp.solveset(eq, r), but I only got a very long set expression, nothing like the given polar equation.
Does anyone know how to do this (I can use SymPy and NumPy)?
The following code builds the equation from its left-hand side and right-hand side. The change of variables to polar coordinates is then performed by substitution.
The resulting trigonometric expression simplifies to zero, so every pair (x, y) = (r*cos(theta), r*sin(theta)) with r = 4*a*cos(theta/3)**3 is a solution.
from sympy import *
a, x, y, theta = symbols(r'a x y \Theta', real=True)
init_printing(use_latex=True)
lhs = 4 * (x**2 + y**2 - a*x) ** 3
rhs = 27 * a**2 * (x**2 + y**2)**2
f = lhs - rhs
r = 4 * a * cos(theta/3)**3
display(f,"----")
f = f.subs(x,r*cos(theta))
f = f.subs(y,r*sin(theta))
display(f,"----")
f1 = f
display(simplify(f))
# format for wolframalpha
t = symbols('t')
f1 = f1.subs(theta,t)
import re
f1 = re.sub(r"\*\*", "^", str(f1))
print("----")
print("wolframalpha expression: solve ", str(f1)," over the reals")
To double-check this, a Wolfram Alpha query is generated at the end, which confirms the solutions.
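An independent, non-symbolic spot check (a sketch) is to pick random a and theta, convert r = 4*a*cos(theta/3)**3 back to Cartesian coordinates, and plug the result into the original equation:

```python
import math
import random

for _ in range(100):
    a = random.uniform(0.5, 2.0)
    theta = random.uniform(0.0, 2.0 * math.pi)
    r = 4 * a * math.cos(theta / 3) ** 3
    x, y = r * math.cos(theta), r * math.sin(theta)
    lhs = 4 * (x**2 + y**2 - a * x) ** 3
    rhs = 27 * a**2 * (x**2 + y**2) ** 2
    # relative tolerance, since the magnitudes vary with a and theta
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9)
print("all spot checks passed")
```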
Here's what I did:
from sympy import *
x = symbols("x")
y = Function("y")
dsolve(diff(y(x),x) - y(x)**x)
The answer I get (SymPy 1.0) is:
Eq(y(x), (C1 - x*(x - 1))**(1/(-x + 1)))
But that's wrong. Neither Mathematica nor Maple can solve this ODE. What's happening here?
A bug. SymPy thinks it's a Bernoulli equation
y' = P(x) * y + Q(x) * y**n
without checking that the exponent n is constant. So the solution is wrong.
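For contrast, here is a sketch with a genuine Bernoulli equation (an example I picked for illustration, with constant exponent n = 2), where dsolve works correctly. checkodesol substitutes a solution back into the ODE, which is a good way to catch wrong results like the one above:

```python
from sympy import symbols, Function, Eq, dsolve, checkodesol

x = symbols('x')
y = Function('y')

# a genuine Bernoulli equation: y' = y + y**2 (constant exponent n = 2)
ode = Eq(y(x).diff(x), y(x) + y(x)**2)
sol = dsolve(ode)

# substitute the solution back into the ODE; (True, 0) means it checks out
print(checkodesol(ode, sol))
```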
I raised an issue on SymPy tracker. It should be soon fixed in the development version of SymPy and subsequently in version 1.2. (As an aside, 1.0 is a bit old, many things have improved in 1.1.1 although not that one.)
With the correction, SymPy recognizes there is no explicit solution and resorts to the power series method, producing the first few terms of the series:
Eq(y(x), x + x**2*log(C1)/2 + x**3*(log(C1)**2 + 2/C1)/6 + x**4*(log(C1)**3 + 9*log(C1)/C1 - 3/C1**2)/24 + x**5*(log(C1)**4 + 2*(log(C1) - 1/C1)*log(C1)/C1 + 2*(2*log(C1) - 1/C1)*log(C1)/C1 + 22*log(C1)**2/C1 - 20*log(C1)/C1**2 + 20/C1**2 + 8/C1**3)/120 + C1 + O(x**6))
You don't have to wait for the patch to get this power series, it can be obtained by giving SymPy a "hint":
dsolve(diff(y(x), x) - y(x)**x, hint='1st_power_series')
It works better with an initial condition:
dsolve(diff(y(x), x) - y(x)**x, ics={y(0): 1}, hint='1st_power_series')
returns
Eq(y(x), 1 + x + x**3/3 - x**4/8 + 7*x**5/30 + O(x**6))
I'm having a big issue with the inverse Laplace transform in SymPy when I try to invert (s + 0.2)/(s*(s + 0.2) + 1).
The code I'm using:
from sympy import *
s = symbols('s')
t = symbols('t')
f = (s + 0.2)/(s*(s + 0.2) + 1)
inverse_laplace_transform(f,s,t)
That code crashes the program and I don't understand why, because the method doesn't crash with other functions.
Apparently there is a problem with mpmath (a module for arbitrary-precision arithmetic), or with its integration into SymPy.
You can work around the bug by replacing 0.2 with a rational. Something like this worked for me:
from sympy import *
s = symbols('s')
t = symbols('t')
a = Rational(1, 5)
f = (s + a)/(s*(s + a) + 1)
inverse_laplace_transform(f,s,t)
That returns:
sqrt(11)*(sin(3*sqrt(11)*t/10) + 3*sqrt(11)*cos(3*sqrt(11)*t/10))*exp(-t/10)*Heaviside(t)/33
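If you don't want to hand-convert every float, nsimplify can do the float-to-rational replacement for you (a sketch):

```python
from sympy import symbols, nsimplify, inverse_laplace_transform

s, t = symbols('s t')
f = (s + 0.2)/(s*(s + 0.2) + 1)

# replace the float 0.2 by the exact Rational(1, 5) everywhere
f_rational = nsimplify(f, rational=True)
print(inverse_laplace_transform(f_rational, s, t))
```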
I am trying to solve equations such as the following for x:
Here the alphas and K are given, and N will be upwards of 1,000. Is there a way to specify the LHS given an np.array of the alphas using SymPy? My hope was to define:
eqn = Eq(LHS - K, 0)
solve(eqn, x)
by telling SymPy that LHS = sum(log(a_i + x)).
Any tips on solvers that would do this the fastest would also be appreciated. Thanks!
I was hoping for something like:
from sympy import Symbol, symbols, solve, summation, log
import numpy as np
N=10
K=1
alpha=np.random.randn(N, 1)
x = Symbol('x')
i = Symbol('i')
eqn = summation(log(x+alpha[i]), (i, 1, N))
solve(eqn-K,x)
You can't index a NumPy array with a SymPy symbol. Since your sum is finite, just use the Python sum function:
>>> alpha=np.random.randn(1, N)
>>> sum([log(x + i) for i in alpha[0]])
log(x - 1.85289943713841) + log(x - 1.40121781484552) + log(x - 1.21850393539695) + log(x - 0.605693136420962) + log(x - 0.575839713282035) + log(x - 0.105389419698408) + log(x + 0.415055726774043) + log(x + 0.71601559149345) + log(x + 0.866995633213984) + log(x + 1.12521825562504)
But even so, I don't get why you don't just rewrite this as (x + alpha[0])*(x + alpha[1])*...*(x + alpha[N - 1]) - exp(K), as suggested by Warren Weckesser. You can then use a numerical solver like SymPy's nsolve or something from another library to solve this numerically:
>>> nsolve(Mul(*[(x + i) for i in alpha[0]]) - exp(K), 1)
mpf('1.2696755961730152')
(The exact value depends on the randomly generated alpha.)
You could also solve the log expression numerically, but unless your logs can have negative arguments, these should be the same.
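If speed matters for N around 1,000, you can also skip SymPy entirely (a sketch; the equation sum(log(x + alpha_i)) = K becomes prod(x + alpha_i) = exp(K) on the branch where every x + alpha_i is positive):

```python
import numpy as np

rng = np.random.default_rng(0)   # seeded so the run is reproducible
N, K = 10, 1.0
alpha = rng.standard_normal(N)

# prod_i (x + alpha_i) is the monic polynomial with roots at x = -alpha_i
coeffs = np.poly(-alpha)
coeffs[-1] -= np.exp(K)          # subtract e^K from the constant term
roots = np.roots(coeffs)

# keep the real roots on the branch where all log arguments are positive
real = roots[np.abs(roots.imag) < 1e-9].real
valid = [r for r in real if np.all(r + alpha > 0)]
print(valid)
```

For x > max(-alpha_i) the sum of logs increases monotonically from minus to plus infinity, so exactly one such root exists.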