Python symbolic integration

I am using symbolic integration to integrate a function that combines a trigonometric function with a power function.
from sympy import *
import math
import numpy as np
t = Symbol('t')
integrate(0.000671813*(7/2*(1.22222222+sin(2*math.pi*t-math.pi/2))-6)**0.33516,t)
However, when I run it, I get an odd result:
0.000671813*Integral((3.0*sin(6.28318530717959*t - 1.5707963267949) - 2.33333334)**0.33516, t)
Why does this result contain Integral()? I checked other examples online and none of them have Integral() in the output.

An unevaluated Integral answer means that SymPy was unable to compute the integral.

Essentially you are trying to integrate a function that looks like
(sin(t) + a)**0.33516
where a is a constant number.
In general such an integral cannot be expressed in terms of elementary functions; see, for example, http://www.sosmath.com/calculus/integration/fant/fant.html,
especially the sentence on Chebyshev's theorem.
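If all you need is a number, you can give the integral definite limits and evaluate it numerically instead. A minimal sketch using the simplified form above, with a hypothetical constant a = 2 so the base stays positive on the chosen interval:
from sympy import Symbol, sin, integrate, Integral
t = Symbol('t')
a = 2  # hypothetical constant chosen so sin(t) + a stays positive
expr = (sin(t) + a)**0.33516
print(integrate(expr, t))                 # still returned as an unevaluated Integral
print(Integral(expr, (t, 0, 1)).evalf())  # numeric value over t in [0, 1]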

Related

Numerical Solver in Python is not able to find a solution

I broke my problem down as follows. I am not able to solve the following equation with Python 3.9 in a meaningful way; instead it always stops at the initial guess for small lambda_ < 1. Is there an alternative algorithm that can handle the error function better? Or can I force fsolve to search until a solution is found?
import numpy as np
from scipy.special import erfcinv, erfc
from scipy.optimize import root, fsolve

def Q(x):
    return 0.5*erfc(x/np.sqrt(2))

def Qinvers(x):
    return np.sqrt(2)*erfcinv(2*x)

def epseqn(epsilon2):
    lambda_ = 0.1
    return Q(lambda_*Qinvers(epsilon2))

eps1 = fsolve(epseqn, 1e-2)
print(eps1)
I tried root and fsolve to get a solution. Especially with the Gaussian error function I do not find a solution that converges.
root and fsolve find the roots of a function defined by f(x) = 0. Since your outer function, which is essentially erfc(x), has no root (it only approaches the x-axis asymptotically from positive values), the solvers cannot find one. This assumes real function arguments, as in your code.
Before blindly starting with numerical calculations, I would recommend thinking about any constraints of your function.
You will find that your function is only defined for values between zero and one. If you assume that there is only a single root in this interval, I would recommend an interval (bracketing) method like brentq, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brentq.html#scipy.optimize.brentq and https://en.wikipedia.org/wiki/Brent%27s_method.
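For reference, this is how brentq is called; note that it needs a bracket [a, b] on which the function changes sign, so the x**2 - 0.5 below is only a made-up stand-in to illustrate the call:
from scipy.optimize import brentq

# made-up example: f changes sign on [0, 1], so brentq can bracket the root
root = brentq(lambda x: x**2 - 0.5, 0.0, 1.0)
print(root)  # approximately 0.7071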
However, you could instead think further and/or just plot your function, e.g. using matplotlib
import matplotlib.pyplot as plt
x = np.linspace(0, 1, 1000)
y = epseqn(x)
plt.plot(x, y)
plt.show()
There you will see that the root is at zero, which makes sense when looking at your functions: Qinvers tends to plus infinity as its argument approaches zero, and Q in turn goes to zero at plus infinity (mathematically in the limit sense, but numerically those functions also handle such input values). So without any numeric calculation, you can get the root value.
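A quick numerical check of that limit behaviour (assuming Q, Qinvers and epseqn from the question are defined):
for eps in (1e-1, 1e-3, 1e-6, 0.0):
    print(eps, epseqn(eps))
# the values decrease monotonically and reach 0 only at eps = 0,
# so the root sits exactly on the boundary of the interval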

SymPy symbolic integration sometimes returns an Integral and sometimes returns an error

I am trying to calculate the following integral and I get either an error or a symbolic answer; SymPy isn't actually calculating the integral.
from sympy import sin, cos, integrate, pi, symbols
t = symbols('t')
u =(-0.029788*sin(t)+0.00078986*cos(2*t)+0.9997)/(-0.019861*sin(t)+0.00039482*cos(2*t)+0.99961)
Q = integrate(u,(t,pi,2*pi))
display(Q)
If you only want a numerical answer (as you can expect when the limits are definite and numeric), then you can skip the attempt to get a symbolic solution -- there may not be one, per the reference link in the comments -- and just create an Integral object and ask for a numerical approximation:
...
>>> from sympy import Integral
>>> Integral(u,(t,pi,2*pi)).n(3)
3.16
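As an alternative, if you prefer SciPy's quadrature, you can turn the expression into a numeric function with lambdify; a sketch, assuming u and t from the question are defined:
import numpy as np
from scipy.integrate import quad
from sympy import lambdify

f = lambdify(t, u, 'numpy')             # SymPy expression -> NumPy-callable function
value, abserr = quad(f, np.pi, 2*np.pi) # numeric integration over [pi, 2*pi]
print(value)                            # roughly 3.16, matching the result above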

Is there a way to integrate the product of two functions numerically in Python?

I have two functions which take multiple arguments:
import numpy as np
from scipy.integrate import quad
gamma_s=0.1 #eV
gamma_d=0.1 #eV
T=298 #K
homo=-5.5 #eV
Ef=-5 #eV
mu=0 #eV just displaces the function
#Fermi-Dirac distribution
k=8.617333262e-5 #eV/K
def fermi(E: float, mu: float, T: float) -> float:
    return 1/(1+np.exp((E-mu)/(k*T)))

#Lorentzian density of states
gamma=gamma_d+gamma_s

def DoS(E: float, gamma: float, homo: float, Ef: float) -> float:
    epsilon=homo-Ef
    v=E-epsilon
    u=gamma/2
    return gamma/(np.pi*((v*v)+(u*u)))
I know that if I want to integrate just one of them, say fermi, then I would use
quad(fermi, -np.inf, np.inf, args=(mu,T))
But I need the integral of their product fermi*DoS with respect to their common variable E, and I can't imagine how to do it with quad, since there is no mention of it in the documentation.
I guess I could define another function integrand as their product and compute its integral, however that sounds somewhat messy and I would prefer a cleaner way of doing it.
You don't have to define a new, standalone function if something inline is more appealing to you:
quad(lambda e: fermi(e, mu, T) * DoS(e, gamma, homo, Ef), -np.inf, np.inf)
That is, we use partial application to turn the product of fermi and DoS into a new Python lambda.
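Equivalently, with a named function, if that reads more clearly to you (a sketch assuming the definitions and constants from the question):
def integrand(E):
    # product of the two functions, with all parameters other than E fixed
    return fermi(E, mu, T) * DoS(E, gamma, homo, Ef)

value, abserr = quad(integrand, -np.inf, np.inf)
print(value, abserr)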
Just to give some mathematical justification for the need to do something like this...
Mathematically speaking, one can only integrate (integrable) functions (or elements of function spaces derived from integrable functions). To integrate the product of two functions, we have to say which function is meant by their product. After a while, this may feel obvious, but here I think it's worth noting that humans defined
(fg)(x) := f(x)g(x).
In the same way one must give mathematical meaning to the product of functions, one must give meaning to the product of two Python functions. This is especially true because Python functions can return all sorts of things, many of which make no sense to multiply, so there couldn't be a general definition.

RuntimeError in solving equation using SymPy

I have an equation to solve; it is the expression in the code below. N and S are constants, for example N = 201 and S = 0.5. I use SymPy in Python to solve it. The Python script is as follows:
from sympy import *
x = Symbol('x')
print(solve((((1-x)/200)**(1-x)) * x**x - 2**(-0.5), x))
However, there is a RuntimeError: maximum recursion depth exceeded in __instancecheck__
I have also tried to use Mathematica, and it can output a result of 0.963
http://www.wolframalpha.com/input/?i=(((1-x)%2F200)+(1-x))*+xx+-+2**(-0.5)+%3D+0
Any suggestion is welcome. Thanks.
Assuming that you don't want a symbolic solution, just a value you can work with (like WA's 0.963), you can use mpmath for this. I'm not sure if it's actually possible to express the solution in radicals - WA certainly didn't even try. You should already have mpmath installed, since SymPy lists it as a dependency (Requires: mpmath).
Specifically, mpmath.findroot seems to do what you want. It takes an actual callable Python object which is the function to find a root of, and a starting value for x. It also accepts some more parameters such as the minimum error tol and the solver to use which you could play around with, although they don't really seem necessary. You could quite simply use it like this:
import mpmath

f = lambda x: (((1 - x)/200)**(1 - x)) * x**x - 2**(-0.5)
print(mpmath.findroot(f, 1))
I just used 1 as a starting value - you could probably think of a better one. Judging by the shape of your graph, there's only one root to be found and it can be approached quite easily, without much need for fancy solvers, so this should suffice. Also, considering that "mpmath is a Python library for arbitrary-precision floating-point arithmetic", you should be able to get a very high precision answer from this if you wished. It has the output of
(0.963904761592753 + 0.0j)
This is actually an mpmath complex or mpc object,
mpc(real='0.96390476159275343', imag='0.0')
If you know it will have an imaginary value of 0, you can just use either of the following methods:
In [6]: abs(mpmath.mpc(23, 0))
Out[6]: mpf('23.0')
In [7]: mpmath.mpc(23, 0).real
Out[7]: mpf('23.0')
to "extract" a single float in the format of an mpf.

Python - solve polynomial for y

I'm taking in a function (e.g. y = x**2) and need to solve for x. I know I can painstakingly solve this manually, but I'm trying to find a method to use instead. I've browsed NumPy, SciPy and SymPy, but can't seem to find what I'm looking for. Currently I'm making a lambda out of the function, so it'd be nice if I could keep that format for the method, but it's not necessary.
Thanks!
If you are looking for numerical solutions (i.e. just interested in the numbers, not the symbolic closed-form solutions), then there are a few options for you in the scipy.optimize module. For something simple, newton is a pretty good start for simple polynomials, but you can take it from there.
For symbolic solutions (which is to say, to get y = x**2 -> x = +/- sqrt(y)), SymPy's solver gives you roughly what you need. The whole SymPy package is directed at doing symbolic manipulation.
Here is an example using the Python interpreter to solve the equation that is mentioned in the question. You will need to make sure that the SymPy package is installed, then:
>>> from sympy import *  # we are importing everything for ease of use
>>> x = Symbol("x")
>>> y = Symbol("y")  # create the two variables
>>> equation = Eq(x ** 2, y)  # create the equation
>>> solve(equation, x)
[y**(1/2), -y**(1/2)]
As you see the basics are fairly workable, even as an interactive algebra system. Not nearly as nice as Mathematica, but then again, it is free and you can incorporate it into your own programs. Make sure to read the Gotchas and Pitfalls section of the SymPy documentation on how to encode the appropriate equations.
If all this was to get a quick and dirty solutions to equations then there is always Wolfram Alpha.
Use Newton-Raphson via scipy.optimize.newton. It finds roots of an equation, i.e. values of x for which f(x) = 0. In the example, you can cast the problem as looking for a root of the function f(x) = x**2 - y. If you supply the forward function (the lambda that computes y from x), you can build a general inverse like this:
from scipy.optimize import newton

def inverse(f, f_prime=None):
    def solve(y):
        # find x such that f(x) = y, i.e. a root of f(x) - y
        return newton(lambda x: f(x) - y, x0=1, fprime=f_prime,
                      tol=1e-10, maxiter=1000000)
    return solve
Using this function is quite simple:
>>> sqrt = inverse(lambda x: x**2)
>>> sqrt(2)
1.4142135623730951
>>> import math
>>> math.sqrt(2)
1.4142135623730951
Depending on the input function, you may need to tune the parameters to newton(). The version above uses a starting guess of 1, a tolerance of 1e-10 and a maximum iteration count of 10**6.
For an additional speed-up, you can supply the derivative of the function in question:
>>> sqrt = inverse(lambda x: x**2, lambda x: 2*x)
In fact, without it, newton actually uses the secant method instead of Newton-Raphson, since Newton-Raphson relies on knowing the derivative.
Check out SymPy, specifically the solver.
