I have been working with piecewise functions in sympy, and it does not seem to integrate them correctly across the pieces.
[Image: Piecewise function]
[Image: Correct integration]
[Image: sympy output]
from sympy import *
init_printing(use_unicode=False, wrap_line=False)
x = Symbol('x')
f = Function('f')
a1 = 4
a2 = 8
a3 = 12
G = Piecewise((x, And(x <= a1, x >= 0)), (4, And(x > a1, x <= a2)), ((x - a3)**2/4, And(x > a2, x <= a3)))
diffeq1 = Eq(f(x).diff(x), G)
gg1 = dsolve(diffeq1, f(x))
Am I missing something in my input concerning the creation of the piecewise function or in the differential equation solver?
Edit 2/24/2021
I have included an example of what g(x) would look like with the boundary condition g(0) = 0. Sympy used this boundary condition, and its answer describes the function as 4x for 4 < x < 8, which is almost there, but it doesn't carry over the value accumulated up to x = 4. What should happen is g(3.999) ~= g(4.0001), but instead sympy gives g(3.999) ~= 8 and g(4.0001) ~= 16.
[Image: numerically integrated method of f(x)]
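For reference, here is a rough sketch of how that numerical integration can be reproduced (the grid spacing and the use of scipy's cumulative_trapezoid are my own choices, not part of the original problem):

import numpy as np
from scipy.integrate import cumulative_trapezoid

xs = np.linspace(0, 12, 1201)
# the same piecewise definition as G above, evaluated numerically
G_num = np.piecewise(
    xs,
    [xs <= 4, (xs > 4) & (xs <= 8), xs > 8],
    [lambda t: t, 4.0, lambda t: (t - 12)**2 / 4],
)
g_num = cumulative_trapezoid(G_num, xs, initial=0)

# g should be continuous at x = 4: both sides approach 8
print(np.interp(3.999, xs, g_num), np.interp(4.0001, xs, g_num))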
I am trying to solve a physics problem that involves the following equation:
[Image: the integro-differential equation]
As you can see, sigma could be any function and K1 is a Bessel function. The differential equation has to be solved for Y(x), but x also appears inside the integral. In addition, another parameter m is free both inside and outside the integral. Do you have any idea how to solve this?
Sympy does not solve this since it has no analytical solution.
My idea was the following,
I write the integral as a function of x, m.
from scipy.integrate import quad
import numpy as np

def get_integral(x, m):
    # integrate f(s, x, m) over s from 4*m**2 to infinity
    integrand = lambda s: f(s, x, m)
    integral = quad(integrand, 4*m**2, np.inf)[0]
    return integral
Then write the derivative,
def dYdx(x, Y, m):
    x1 = f(x, m)
    x2 = get_integral(x, m)
    x3 = Y**2 - Y_0(x, m)**2
    return x1*x2*x3
So I use odeint to solve it like,
solution = odeint(dYdx, y0=0, t=x_array, tfirst=True, args=(m,))
But as you can see, I need to solve it for different values of x, s, and m, so I want to end up with an array with the structure
solution_array = [x, Y, m, integral_value]
for each x and m I evaluate.
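Here is a minimal sketch of the looping structure I have in mind, assuming the dYdx and get_integral functions above are defined (the x grid and m values below are placeholders):

import numpy as np
from scipy.integrate import odeint

x_array = np.linspace(1.0, 10.0, 200)   # placeholder x grid
m_values = [0.1, 0.5, 1.0]              # placeholder m values

rows = []
for m in m_values:
    Y = odeint(dYdx, y0=0.0, t=x_array, tfirst=True, args=(m,)).ravel()
    for x, y in zip(x_array, Y):
        rows.append([x, y, m, get_integral(x, m)])

solution_array = np.array(rows)         # columns: x, Y, m, integral_value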
I am interested in solving integro-differential equations that not only contain functions of variables, but also inverse functions. In SymPy a function can be represented by Function:
from sympy import *
x = Symbol('x')
f = Function('f')(x)
However, I did not find an inverse function method. While Function has the method inverse, it is more specifically the multiplicative inverse.
Attempting to setup an equation and solve it to obtain a symbolic inverse is not implemented:
from sympy import *
x = Symbol('x')
y = Symbol('y')
f = Function('f')(x)
solve(Eq(y, f))
NotImplementedError:
No algorithms are implemented to solve equation y - f(x)
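For comparison, when the function is a concrete invertible expression, solve does return the compositional inverse I am after:

from sympy import Symbol, exp, solve, Eq

x = Symbol('x')
y = Symbol('y')

# a concrete, invertible expression works
print(solve(Eq(y, exp(x)), x))   # [log(y)]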
Is there a way to setup a symbolic inverse of a function in SymPy?
I am trying to solve nonlinear equations in Python. I have tried using Sympy's solver, but it doesn't seem to work in a for loop. I am trying to solve for the variable x over a range of inputs [N].
I have attached my code below
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
f_curve_coefficients = [-7.14285714e-02, 1.96333333e+01, 6.85130952e+03]
S = [0.2122, 0, 0]
a2 = f_curve_coefficients[0]
a1 = f_curve_coefficients[1]
a0 = f_curve_coefficients[2]
s2 = S[0]
s1 = S[1]
s0 = S[2]
answer=[]
x = symbols('x')
for N in range(0, 2500, 5):
    solve([a2*x**2+a1*N*x+a0*N**2-s2*x**2-s1*x-s0-0])
    answer.append(x)

print(answer)
There could be more efficient ways of solving this problem than using sympy; any help will be much appreciated.
Note that I am still new to Python, after transitioning from Matlab. I could easily solve this problem in Matlab and could attach the code, but I am battling with it in Python.
Answering your question "There could be more efficient ways of solving this problem than using sympy":
You can use fsolve to find the roots of a nonlinear equation:
fsolve returns the roots of the (non-linear) equations defined by func(x) = 0 given a starting estimate
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html
below is the code:
from scipy.optimize import fsolve
import numpy as np
def f(variables):
    (x, y) = variables
    first_eq = 2*x + y - 1
    second_eq = x**2 + y**2 - 1
    return [first_eq, second_eq]

roots = fsolve(f, (-1, -1))  # fsolve(equations, X_0)
print(roots)
# [ 0.8 -0.6]

print(np.isclose(f(roots), [0.0, 0.0]))  # func(root) should be almost 0.0.
If you prefer sympy you can use nsolve.
>>> nsolve([x+y**2-4, exp(x)+x*y-3], [x, y], [1, 1])
[0.620344523485226]
[1.83838393066159]
The first argument is a list of equations, the second is list of variables and the third is an initial guess.
Also, for details you can check out a similar question asked earlier on Stack Overflow regarding ways to solve nonlinear equations in Python:
How to solve a pair of nonlinear equations using Python?
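Applied to the equation in your loop, a minimal fsolve sketch could look like this (the initial guess of 1.0 is my own choice; a quadratic has two roots, so which one you get depends on the guess, and N = 0 gives a degenerate equation with a double root at x = 0):

from scipy.optimize import fsolve

f_curve_coefficients = [-7.14285714e-02, 1.96333333e+01, 6.85130952e+03]
S = [0.2122, 0, 0]
a2, a1, a0 = f_curve_coefficients
s2, s1, s0 = S

answer = []
for N in range(0, 2500, 5):
    func = lambda x: a2*x**2 + a1*N*x + a0*N**2 - s2*x**2 - s1*x - s0
    answer.append(fsolve(func, 1.0)[0])   # 1.0 is the initial guess

print(answer)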
According to the documentation, the return value of solve is the solution. Nothing is assigned to x; it is still just the symbol.
x = symbols('x')
for N in range(0, 2500, 5):
    result = solve(a2*x**2+a1*N*x+a0*N**2-s2*x**2-s1*x-s0-0)
    answer.append(result)
I am trying to use sympy to solve an equation for a one-dimensional steady-state model of the solar wind. I have the code below.
from sympy import Eq, var, solve
var('r',real=True)
eq = Eq((1./2.)*(CF**2/r) + CT*r**gamma + bm/(2.*muo) - CM, 0)
a = solve(eq,r)
Where CF, CT, CM, gamma, muo, and bm are just real numbers. I am trying to solve the equation for r over a range of values for bm, but it will not return any numbers. Upon running the block of code, my Python notebook just shows that the code is running, but it never returns a value or stops. Is there an alternative function or some sort of command I should be giving to sympy in order to make it work faster?
The equation involves the sum of two powers of r, including r**gamma. Unless gamma is a very small integer (between -4 and 4), there is no hope of solving this symbolically (which is what sympy is for).
To solve it numerically, you need scipy rather than sympy. For example:
from scipy.optimize import fsolve
# assign some numeric values to CF, CT, gamma, bm, muo, CM before calling fsolve
func = lambda r: (1./2.)*(CF**2/r) + CT*r**gamma + bm/(2.*muo) - CM
sol = fsolve(func, 1)  # 1 is the initial guess for the solver
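To scan over a range of bm values numerically, a sketch along those lines might be (all the constants here are placeholder numbers, not physically meaningful values):

import numpy as np
from scipy.optimize import fsolve

# placeholder constants -- substitute the real ones
CF, CT, CM, gamma, muo = 1.0, 1.0, 5.0, 1.5, 1.0

roots = []
for bm in np.linspace(0.1, 1.0, 10):
    func = lambda r: (1./2.)*(CF**2/r) + CT*r**gamma + bm/(2.*muo) - CM
    roots.append(fsolve(func, 1.0)[0])   # 1.0 is the initial guess for each bm

print(roots)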
I have a range of data that I have approximated using a polynomial of degree 2 in Python. I want to calculate the area underneath this polynomial between 0 and 1.
Is there a calculus, or similar package from numpy that I can use, or should I just make a simple function to integrate these functions?
I'm a little unclear what the best approach for defining mathematical functions is.
Thanks.
If you're integrating only polynomials, you don't need to represent a general mathematical function; use numpy.poly1d, which has an integ method for integration.
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print(p)
   2
2 x + 4 x + 6
>>> i = p.integ()
>>> i
poly1d([ 0.66666667, 2. , 6. , 0. ])
>>> integrand = i(1) - i(0) # Use call notation to evaluate a poly1d
>>> integrand
8.6666666666666661
For integrating arbitrary numerical functions, you would use scipy.integrate with ordinary Python functions as the integrands. For integrating functions analytically, you would use sympy. It doesn't sound like you want either in this case, especially not the latter.
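For completeness, the analytic sympy route mentioned above would look something like this for the same example polynomial:

from sympy import Symbol, integrate

x = Symbol('x')
print(integrate(2*x**2 + 4*x + 6, (x, 0, 1)))   # 26/3, i.e. about 8.667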
Look, Ma, no imports!
>>> coeffs = [2., 4., 6.]
>>> sum(coeff / (i+1) for i, coeff in enumerate(reversed(coeffs)))
8.6666666666666661
>>>
Our guarantee: Works for a polynomial of any positive degree or your money back!
Update from our research lab: Guarantee extended; s/positive/non-negative/ :-)
Update: Here's the industrial-strength version that is robust in the face of stray ints in the coefficients, does not have a function call in the loop, and uses neither enumerate() nor reversed() in the setup:
>>> icoeffs = [2, 4, 6]
>>> tot = 0.0
>>> divisor = float(len(icoeffs))
>>> for coeff in icoeffs:
... tot += coeff / divisor
... divisor -= 1.0
...
>>> tot
8.6666666666666661
>>>
It might be overkill to resort to general-purpose numeric integration algorithms for your special case...if you work out the algebra, there's a simple expression that gives you the area.
You have a polynomial of degree 2: f(x) = ax^2 + bx + c
You want to find the area under the curve for x in the range [0,1].
The antiderivative is F(x) = ax^3/3 + bx^2/2 + cx + C
The area under the curve from 0 to 1 is: F(1) - F(0) = a/3 + b/2 + c
So if you're only calculating the area for the interval [0,1], you might consider
using this simple expression rather than resorting to the general-purpose methods.
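For instance, with the example coefficients used in the other answers (a = 2, b = 4, c = 6):

a, b, c = 2.0, 4.0, 6.0
area = a/3 + b/2 + c   # F(1) - F(0)
print(area)            # 8.666666666666666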
'quad' in scipy.integrate is the general purpose method for integrating functions of a single variable over a definite interval. In a simple case (such as the one described in your question) you pass in your function and the lower and upper limits, respectively. 'quad' returns a tuple comprised of the integral result and an upper bound on the error term.
from scipy import integrate as TG
fnx = lambda x: 3*x**2 + 9*x # some polynomial of degree two
aoc, err = TG.quad(fnx, 0, 1)
Note: after I posted this, I saw an answer, posted before mine, which represents polynomials using 'poly1d' in NumPy. My scriptlet just above can also accept a polynomial in this form:
import numpy as NP
px = NP.poly1d([2,4,6])
aoc, err = TG.quad(px, 0, 1)
# returns (8.6666666666666661, 9.6219328800846896e-14)
If one is integrating quadratic or cubic polynomials from the get-go, an alternative to deriving the explicit integral expressions is to use Simpson's rule; it is a deep fact that this method exactly integrates polynomials of degree 3 and lower.
To borrow Mike Graham's example (I haven't used Python in a while; apologies if the code looks wonky):
>>> import numpy
>>> p = numpy.poly1d([2, 4, 6])
>>> print(p)
   2
2 x + 4 x + 6
>>> integrand = (1 - 0)*(p(0) + 4*p((0 + 1)/2) + p(1))/6
The last line uses Simpson's rule to compute the value of integrand. You can verify for yourself that the method works as advertised.
Of course, I deliberately did not simplify the expression for integrand, to make clear that the 0 and 1 can be replaced with arbitrary values u and v, and the code will still find the integral of the function from u to v.
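To make that explicit, here is a small helper (the name simpson_exact is my own) that takes arbitrary endpoints u and v:

import numpy

def simpson_exact(poly, u, v):
    # Simpson's rule; exact for polynomials of degree 3 and lower
    return (v - u) * (poly(u) + 4*poly((u + v) / 2) + poly(v)) / 6

p = numpy.poly1d([2, 4, 6])
print(simpson_exact(p, 0, 1))   # 8.666666666666666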