I am trying to obtain the Mathieu characteristic values for a specific problem. Obtaining them is not the issue, and I have read the SciPy documentation for these functions. The problem is that I know for a fact that the values I am obtaining are not right. My script for computing the characteristic values I need is below:
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import mathieu_a, mathieu_b, mathieu_cem, mathieu_sem
M = 1.0
g = 1.0
l = 1.0
h = 0.06
U0 = M * g * l
q = 4 * M * l**2 * U0 / h**2
def energy(n, q):
    if n % 2 == 0:
        return (h**2 / (8 * M * l**2)) * mathieu_a(n, q) + U0
    else:
        return (h**2 / (8 * M * l**2)) * mathieu_b(n + 1, q) + U0
n_list = np.arange(0, 80, 1)
e_n = [energy(i, q) for i in n_list]
plt.plot(n_list, e_n, '.')
The resulting plot of these values is this one. There is a zone that looks like "noise" or a numerical error, and I know those jumps must not occur. In reality, from around x = 40 onward, the points should behave like a staircase of pairs of consecutive points, similar to what can be seen for 70 < x < 80. The values that x can take in this case are only positive integers.
I saw that the implementation of the Mathieu functions has some problems (see here), but that was six years ago! The answer to that question uses the NAG Library for Python, which is not exactly open source.
Is there a way I can still use these functions from Scipy without having this problem? Or is it related to the precision I am using to obtain the Mathieu characteristic value?
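One workaround, not from the question and only a sketch of a standard approach: compute the characteristic values yourself as eigenvalues of the truncated symmetric tridiagonal matrices that arise from the Fourier expansion of the periodic Mathieu functions (see DLMF 28.4). This avoids scipy.special.mathieu_a / mathieu_b for large n. The helper names and the truncation size below are my own choices; double-check the index mapping before relying on it.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def mathieu_a_eig(n, q, size=120):
    # a_n(q) from a truncated tridiagonal eigenproblem; size should be
    # comfortably larger than both n and sqrt(q)
    if n % 2 == 0:   # even order: ce_{2k} expansion, diagonal (2m)^2
        d = (2.0 * np.arange(size))**2
        e = np.full(size - 1, float(q))
        e[0] = np.sqrt(2) * q
        return eigh_tridiagonal(d, e, eigvals_only=True)[n // 2]
    else:            # odd order: ce_{2k+1} expansion, diagonal (2m+1)^2, first entry 1+q
        d = (2.0 * np.arange(size) + 1)**2
        d[0] = 1.0 + q
        e = np.full(size - 1, float(q))
        return eigh_tridiagonal(d, e, eigvals_only=True)[(n - 1) // 2]

def mathieu_b_eig(n, q, size=120):
    # b_n(q), n >= 1, computed the same way
    if n % 2 == 0:   # even order: se_{2k+2} expansion, diagonal (2m+2)^2
        d = (2.0 * np.arange(size) + 2)**2
        e = np.full(size - 1, float(q))
        return eigh_tridiagonal(d, e, eigvals_only=True)[n // 2 - 1]
    else:            # odd order: se_{2k+1} expansion, diagonal (2m+1)^2, first entry 1-q
        d = (2.0 * np.arange(size) + 1)**2
        d[0] = 1.0 - q
        e = np.full(size - 1, float(q))
        return eigh_tridiagonal(d, e, eigvals_only=True)[(n - 1) // 2]
These helpers can be swapped into energy() in place of mathieu_a and mathieu_b to check whether the jumps around n = 40 disappear; comparing against SciPy for small n (where SciPy is reliable) is a quick sanity check.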
Related: GitHub issue post
Summarize the problem:
I am trying to solve this math problem (given as a picture) using Python SymPy in a Jupyter Notebook.
Describe what you've tried:
Here is a picture of my Jupyter notebook. I have imported SymPy, created an untyped symbol a_n, and then tried to rewrite the expression in my notebook. The problem I came across was trying to do algebra in the subscript.
from sympy import *
a_n = symbols('a_n')
Eq(a_n,3*a_n-2)
Update after 6 hours
With help from the comment by user dancxviii, I have changed the code to:
from sympy import *
from sympy import sequence
from sympy.abc import n
a_n = symbols("a_n")
a_n1 = symbols("a_{n-1}")
a_n2 = symbols("a_{n-2}")
a_n3 = symbols("a_{n-3}")
e1 = Eq(a_n,3*a_n2+(28*a_n2*a_n3)/a_n1)
e1
Displayed in LaTeX in the notebook, it looks like this.
I am hoping to be able to call .subs() on a_n and substitute the actual value of n, if that's possible.
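A hedged aside, not in the original post: if the end goal is to substitute concrete values of n, one common SymPy pattern is to represent the sequence with Function (or IndexedBase) instead of plain symbols, so that the subscript is a real expression rather than part of a name. A minimal sketch:
from sympy import Function, Eq, symbols

n = symbols('n', integer=True)
a = Function('a')   # a(n) instead of a plain symbol named "a_n"

e1 = Eq(a(n), 3*a(n - 2) + 28*a(n - 2)*a(n - 3)/a(n - 1))
print(e1.subs(n, 5))   # Eq(a(5), 3*a(3) + 28*a(2)*a(3)/a(4))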
You can use:
from sympy import *
a_n = symbols('a_n')
a_n_minus_2 = symbols('a_{n-2}')
Eq(a_n,3*a_n_minus_2)
In case you are having trouble with the problem given (I can't tell from your question), the best way to solve it is:
Let r_n = a_(n + 2) / a_(n).
After dividing both sides of the recurrence relation by a_(n - 2):
r_(n - 2) = 3 + 28 / r_(n - 3), which after shifting the index is r_n = 3 + 28 / r_(n - 1)
The fixed points for this recurrence relation are at -4 and 7. Therefore the closed form will have the form:
r_n = (w * 7^n + x * (-4)^n) / (y * 7^n + z * (-4)^n)
This is a 4 variable system where 1 of the variables is redundant, so you can set w to 1.
If you plug this form into the initial recurrence relation with the initial value of r_0 = 17 / 6, the resulting series of linear equations (done on paper) gives:
w = 1
x = -100 / 287
y = 1 / 7
z = 25 / 287
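A quick SymPy check of those coefficients (a sketch; the initial value r_0 = 17/6 is the one assumed in this answer):
from sympy import Rational

# closed-form coefficients derived above
w, x, y, z = 1, Rational(-100, 287), Rational(1, 7), Rational(25, 287)

def r(k):
    return (w * 7**k + x * (-4)**k) / (y * 7**k + z * (-4)**k)

print(r(0))                                                    # 17/6
print(all(r(k) == 3 + 28 / r(k - 1) for k in range(1, 10)))    # True: recurrence holds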
Is it possible to solve a cubic equation without using SymPy?
Example:
import sympy as sp
xp = 30
num = xp + 4.44
sp.var('x, a, b, c, d')
Sol3 = sp.solve(0.0509 * x ** 3 + 0.0192 * x ** 2 + 3.68 * x - num, x)
The result is:
[6.07118098358257, -3.2241955998463 - 10.0524891203436*I, -3.2241955998463 + 10.0524891203436*I]
But I want to find a way to do it with NumPy, or without any third-party library at all.
I tried with numpy:
import numpy as np
coeff = [0.0509, 0.0192, 3.68, --4.44]
print(np.roots(coeff))
But the result is:
[ 0.40668245+8.54994773j 0.40668245-8.54994773j -1.19057511+0.j]
In your NumPy method you are making two slight mistakes with the final coefficient.
In the SymPy example your last coefficient is - num, which is, according to your code: -num = -(xp + 4.44) = -(30 + 4.44) = -34.44
In your NumPy example your last coefficient is --4.44, which is +4.44 and does not equal -34.44.
If you edit the NumPy code you will get:
import numpy as np
coeff = [0.0509, 0.0192, 3.68, -34.44]
print(np.roots(coeff))
[-3.2241956 +10.05248912j -3.2241956 -10.05248912j
6.07118098 +0.j ]
The answers are thus the same (note that NumPy uses j to denote the imaginary unit, while SymPy uses I).
You could implement the cubic formula yourself; this YouTube video from Mathologer could help you understand it.
Based on that, the cubic function for ax^3 + bx^2 + cx + d = 0 can be written like this:
def cubic(a, b, c, d):
    # Cardano-style formula for a*x^3 + b*x^2 + c*x + d = 0
    n = -b**3/27/a**3 + b*c/6/a**2 - d/2/a
    s = (n**2 + (c/3/a - b**2/9/a**2)**3)**0.5
    r0 = (n-s)**(1/3) + (n+s)**(1/3) - b/3/a
    # r1 and r2 as written are placeholders; the other two roots need the
    # complex cube roots of unity (see the note at the end of this answer)
    r1 = (n+s)**(1/3) + (n+s)**(1/3) - b/3/a
    r2 = (n-s)**(1/3) + (n-s)**(1/3) - b/3/a
    return (r0, r1, r2)
The simplified version of the formula only needs c and d as parameters (a.k.a. p and q, i.e. the depressed cubic x^3 + p*x + q = 0) and can be implemented like this:
def cubic(p, q):
    n = -q/2
    s = (q*q/4 + p**3/27)**0.5
    r0 = (n-s)**(1/3) + (n+s)**(1/3)
    r1 = (n+s)**(1/3) + (n+s)**(1/3)   # placeholder, not a root in general
    r2 = (n-s)**(1/3) + (n-s)**(1/3)   # placeholder, not a root in general
    return (r0, r1, r2)
print(cubic(-15,-126))
(5.999999999999999, 9.999999999999998, 2.0)
I'll let you mix in complex number operations to properly get all 3 roots
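Building on that remark, here is one possible way (my own sketch, standard library only, with a hypothetical helper name) to get all three roots by combining Cardano's formula with the complex cube roots of unity:
import cmath

def cubic_all_roots(a, b, c, d):
    # all three roots of a*x^3 + b*x^2 + c*x + d = 0 via Cardano's formula
    p = c/a - b**2/(3*a**2)                    # depressed cubic t^3 + p*t + q, x = t - b/(3a)
    q = 2*b**3/(27*a**3) - b*c/(3*a**2) + d/a
    s = cmath.sqrt((q/2)**2 + (p/3)**3)
    u = (-q/2 + s) ** (1/3)
    if abs(u) < 1e-12:
        u = (-q/2 - s) ** (1/3)                # pick the other branch if the first is ~0
    if abs(u) < 1e-12:
        return [-b/(3*a)] * 3                  # triple root (p = q = 0)
    w = complex(-0.5, 3**0.5/2)                # primitive cube root of unity
    return [u*w**k - p/(3*u*w**k) - b/(3*a) for k in range(3)]

print(cubic_all_roots(0.0509, 0.0192, 3.68, -34.44))
# expected: approximately 6.071 and -3.224 +/- 10.05j, matching the SymPy/NumPy results above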
I want to do a plot of this equation below:
Problem 1: Since my function is a function of ν, I have to calculate the integral for each ν in my domain. My question is: what is the best way to do that?
I thought about using SciPy to do the integral and a for loop to evaluate it for each ν, but that seems a very inelegant way to solve the problem. Does someone know a better alternative or have a different idea?
Problem 2: When I run my code I get some errors, mainly because I think the exponential has a very small exponent. Do you have any ideas about how I should change it so I can make this plot in Python?
Oh, if you try with a different method, it is supposed to look like this
Here is the code I was working on. I'm just coming back to Python, so there may be some errors. The plot I'm getting is very different from the one it is supposed to look like.
from scipy.integrate import quad
from scipy.constants import c, Planck, k, pi
import numpy as np
import matplotlib.pyplot as plt
def luminosity_integral(r, x):
    T_est = 4000
    R_est = 2.5 * (696.34*1e6)
    Temp = ((2/(3*pi))**(1/4)) * T_est * ((R_est/r)**(3/4))
    termo1 = ((4 * (pi**2) * Planck * (x**4) ) / (c**2))
    termo2 = ((Planck * x) / (k*Temp))
    return ((termo1 * r ) / (np.exp(termo2) - 1))
freqs = np.linspace(1e10, 1e16)
y = np.array([])
for i in freqs:
    I = quad(luminosity_integral, (6 * 2.5 * (696.34*1e6)), (7e4 * 2.5 * (696.34*1e6)), args = (i))
    temp = np.array([I[0]])
    y = np.concatenate((y, temp))
plt.loglog(freqs, y)
plt.show()
Reuse the term R_est instead of writing its expression three times (better if you want to change that parameter).
You used pi**2 in the constant multiplying the integrand; the version below uses pi (this doesn't affect the shape).
The shape resembles the reference you posted, but not over the suggested range.
You are using the value of T as T_*; are you sure about that?
Try this version of the code:
from scipy.integrate import quad
from scipy.constants import c, Planck, k, pi
import numpy as np
import matplotlib.pyplot as plt
R_est = 2.5 * (696.34e6)
def luminosity_integral(r, x):
    T_est = 4000
    termo1 = ((4 * pi * Planck * (x**4) ) / (c**2))
    termo2 = ((Planck * x) / (k*T_est)) * (3*pi/2 * (r/R_est)**3)**0.25
    # write 1/(exp(t) - 1) as exp(-t)/(1 - exp(-t)) to avoid overflow for large t
    termo3 = np.exp(-termo2)
    return ((termo1 * r ) * termo3 / (1 - termo3))
freqs = np.logspace(6, 16)
y = np.zeros_like(freqs)
for i, nu in enumerate(freqs):
    y[i] = quad(luminosity_integral, (6* R_est), (7e4 * R_est), args = (nu))[0]
plt.loglog(freqs, y)
plt.ylim([1e6, 1e25])
plt.show()
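On Problem 1 (avoiding the explicit Python loop): one option not used above is scipy.integrate.quad_vec, which integrates a vector-valued integrand. A minimal sketch, reusing luminosity_integral, freqs and R_est from the corrected code:
from scipy.integrate import quad_vec

# integrate over r once; the integrand returns one value per frequency in freqs
y_vec, err = quad_vec(lambda r: luminosity_integral(r, freqs), 6 * R_est, 7e4 * R_est)

# y_vec should agree with the loop-based y up to the integration tolerances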
The FFT code below does not give results similar to the SciPy library. But I don't know what's wrong with this code.
import numpy as np
import matplotlib.pyplot as plt
#from scipy.fftpack import fft
def omega(p, q):
    return np.exp((-2j * np.pi * p) / q)

def fft(x):
    # recursive radix-2 (Cooley-Tukey) FFT
    N = len(x)
    if N <= 1: return x
    even = fft(x[0::2])
    odd = fft(x[1::2])
    combined = [0] * N
    for k in range(N//2):
        combined[k] = even[k] + omega(k,N) * odd[k]
        combined[k + N//2] = even[k] - omega(k,N) * odd[k]
    return combined
N = 600
T = 1.0 / 800.0
x = np.linspace(0, N*T, N)
#y = np.sin(50.0 * 2.0*np.pi*x) + 0.5*np.sin(80.0 * 2.0*np.pi*x)
y = np.sin(50.0 * 2.0*np.pi*x)
xf = np.linspace(0.0, 1.0/(2.0*T), N//2)
yf = fft(y)
yfa = 2.0/N * np.abs(yf[0:N//2])
plt.plot(xf, yfa)
plt.show()
This gives:
All the above comments, i.e. about roundoff errors and implementation correctness, are true, but you missed an important thing: Cooley and Tukey's original FFT algorithm only works if the number of samples N is a power of 2. You did notice that
np.allclose(yfa,yfa_sp)
>>> False
for your current input N = 600: the discrepancies between your output and numpy/scipy are huge. But now, let's use the closest power of two, in this case N = 2**9 = 512, which gives
np.allclose(yfa,yfa_sp)
>>> True
Wonderful! The outputs are now identical, and this can be verified for other power-of-2 sizes of the input signal y (Nyquist criterion aside). For an in-depth explanation, you may read the accepted answer to this question to understand why numpy/scipy fft functions can handle any N (most efficiently when N is a power of two, least efficiently when N is prime), instead of simply raising an error, as you should have done with something like:
if np.log2(N) % 1 > 0:
    raise ValueError('size of input y must be a power of 2')
or even, using the bitwise AND operator (a truly elegant test, in my opinion):
if N & (N - 1):
    raise ValueError('size of input y must be a power of 2')
As suggested in the comments, if the size of the signal can't be modified so easily, zero-padding is definitely the way to go for this kind of sampling issue.
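A minimal sketch of that zero-padding idea (not from the original answer; it reuses the recursive fft defined in the question, and the helper name is my own):
import numpy as np

def fft_padded(y):
    # pad y with zeros up to the next power of two, then use the radix-2 FFT above
    N = len(y)
    N_pad = 1 << (N - 1).bit_length()   # smallest power of two >= N
    return fft(np.concatenate([y, np.zeros(N_pad - N)])), N_pad
Keep in mind that padding changes the frequency resolution, so the frequency axis has to be rebuilt with N_pad instead of N.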
Hope this helps.
I have a coupled system of differential equations that I've already solved with Euler in Excel. Now I want to make it more precise with an ODE-solver in python.
However, there must be a mistake in my code because the curves look different from those in Excel. I don't expect the curves to reach 1 and 0 in the end.
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# define reactor
def reactor(x,z):
    n_a = x[0]
    n_b = x[1]
    n_c = x[2]
    dn_adz = A * (-1) * B * (n_a/(n_a + n_b + n_c)) / (1 + C * (n_c/(n_a + n_b + n_c)))
    dn_bdz = A * (1) * B * (n_a/(n_a + n_b + n_c)) / (1 + C * (n_c/(n_a + n_b + n_c)))
    dn_cdz = A * (1) * B * (n_a/(n_a + n_b + n_c)) / (1 + C * (n_c/(n_a + n_b + n_c)))
    dxdz = [dn_adz,dn_bdz,dn_cdz]
    return dxdz
# initial conditions
n_a0 = 0.5775
n_b0 = 0.0
n_c0 = 0.0
x0 = [n_a0, n_b0, n_c0]
# parameters
A = 0.12
B = 3.1e-9
C = 4.02e15
# number of steps
n = 100
# z step interval (m)
z = np.linspace(0,0.0274,n)
# solve ODEs
x = odeint(reactor,x0,z)
# Plot the results
plt.plot(z,x[:,0],'b-')
plt.plot(z,x[:,1],'r--')
plt.plot(z,x[:,2],'k:')
plt.show()
Is it a problem that the initial condition stays constant and does not change from step to step?
Should it be like Euler in Excel, where the next step uses the conditions/values of the previous step?
From the structure of the right-hand sides you get conserved combinations of the state variables, n_a + n_b = n_a0 + n_b0 and n_a + n_c = n_a0 + n_c0. This means that the dynamics reduce to the one-dimensional dynamics of n_a.
By the first equation, the derivative of n_a is negative for positive n_a, so the solution falls towards n_a = 0. By these conserved quantities, n_b converges to n_a0 + n_b0 and n_c converges to n_a0 + n_c0.
It is unclear how you get convergence towards 1 in some components, as that is not supported by the initial conditions. Apart from that, the odeint result you describe fits this qualitative behavior.
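A quick numerical check of those conserved combinations (a sketch reusing x, n_a0, n_b0 and n_c0 from the question's script):
import numpy as np

print(np.allclose(x[:, 0] + x[:, 1], n_a0 + n_b0))   # n_a + n_b stays constant
print(np.allclose(x[:, 0] + x[:, 2], n_a0 + n_c0))   # n_a + n_c stays constant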