I am trying to simulate a two-dimensional, two-charge situation using Euler's method and the Runge-Kutta 4th-order (RK4) method. Both methods give roughly the results I expect, but I had never tried my RK4 method with different initial conditions.
Starting with a positive initial velocity vx0 and a negative initial x-position x0, the code seems to work just fine. But when I flip them so that vx0 is negative and x0 is positive, I get different answers, even though the two cases should be symmetric.
I made sure that my Euler method works for both variations of the initial conditions, to confirm that it is my RK4 function that is the problem. I struggled initially to apply the RK4 method to two-dimensional motion, so it doesn't surprise me that an error has crept in. Below is an image that might make the problem a bit clearer.
Here is where I calculate the constants:
# Initial conditions
time[0] = t = t0
vx1[0] = vx = vx0
vy1[0] = vy = vy0
x1[0] = x = x0
y1[0] = y = y0
for i in range(1, n + 1):
# Calculate our constants
k1vx = step * ax(x, y, q1, q2, m)
k1vy = step * ay(x, y, q1, q2, m)
k1x = step * vx
k1y = step * vy
k2vx = step * ax(x + 0.5 * k1x, y + 0.5 * k1y, q1, q2, m)
k2vy = step * ay(x + 0.5 * k1x, y + 0.5 * k1y, q1, q2, m)
k2x = step * (vx + (k1vx / 2))
k2y = step * (vy + (k1vy / 2))
k3vx = step * ax(x + 0.5 * k2x, y + 0.5 * k2y, q1, q2, m)
k3vy = step * ay(x + 0.5 * k2x, y + 0.5 * k2y, q1, q2, m)
k3x = step * (vx + (k2vx / 2))
k3y = step * (vy + (k2vy / 2))
k4vx = step * ax(x + k3x, y + k3y, q1, q2, m)
k4vy = step * ay(x + k3x, y + k3y, q1, q2, m)
k4x = step * (vx + k3vx)
k4y = step * (vy + k3vy)
# Update the values based on our calculated constants
vx1[i] = vx = vx + (k1vx + k2vx + k2vx + k3vx + k3vx + k4vx) / 6
vy1[i] = vy = vy + (k1vx + k2vy + k2vy + k3vy + k3vy + k4vy) / 6
x1[i] = x = x + ((k1x + 2 * k2x + 2 * k3x + k4x) / 6)
y1[i] = y = y + ((k1y + 2 * k2y + 2 * k3y + k4y) / 6)
# Update the time
time[i] = t = t0 + i * step
Here are the functions that I use for ax and ay in the code above:
def accel_rk4_x(x, y, q1, q2, m):
const = (q1 * q2) / (4 * math.pi * 8.854e-12 * m)
return const * (x / ((x ** 2 + y ** 2) ** 1.5))
def accel_rk4_y(x, y, q1, q2, m):
const = (q1 * q2) / (4 * math.pi * 8.854e-12 * m)
return const * (y / ((x ** 2 + y ** 2) ** 1.5))
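For what it's worth, these functions are odd in x (and ay is even in x), so mirroring the initial conditions really should mirror the trajectory. A quick sanity check of that property, with arbitrary sample values for the charges, mass and position:
import math

q1, q2, m = 1e-6, 1e-6, 1e-3   # arbitrary sample values, only for this check
x, y = 0.3, 0.4

# ax is odd in x, ay is even in x
assert math.isclose(accel_rk4_x(-x, y, q1, q2, m), -accel_rk4_x(x, y, q1, q2, m))
assert math.isclose(accel_rk4_y(-x, y, q1, q2, m), accel_rk4_y(x, y, q1, q2, m))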
I appreciate any help with this problem! I might just need a second pair of eyes.
I am trying to replace the function U2(x, y, z) with specified values of x, y, z. I am not sure how to do that with SymPy, because the values are given as x = np.arange(-h, h, 0.001), as seen in the code below.
Below you will find my implementation with SymPy. Additionally, I am using PyCharm.
This implementation is based on Dr. Annabestani's and Dr. Naghavi's paper: A 3D analytical ion transport model for ionic polymer metal composite actuators in large bending deformations
import sympy as sp
import numpy as np
h = 0.1 # [mm] half of thickness
W: float = 6 # [mm] width
L: float = 28 # [mm] length
F: float = 96458 # [C/mol] Faraday's constant
k_e = 1.34E-6 # [F/m]
Y = 5.71E8 # [Pa]
d = 1.03E-11 # [m^2/s] diffusivity coefficient
T = 293 # [K]
C_minus = 1200 # [mol/m^3] Cation concentration
C_plus = 1200 # [mol/m^3] anion concentration
R = 8.3143 # [J/mol*K] Gas constant
Vol = 2*h*W*L
#dVol = diff(Vol,x) + diff(Vol, y) + diff(Vol, z) # change in Volume
theta = 1 / W
x, y, z, m, n, p, t = sp.symbols('x y z m n p t')
V_1 = 0.5 * sp.sin(2 * sp.pi * t) # Voltage as a function of time
k_f = 0.5
t_f = 44
k_g = 4.5
t_g = 0.07
B_mnp = 0.003
b_mnp: float = B_mnp
gamma_hat_2 = 0.04
gamma_hat_5 = 0.03
gamma_hat_6 = 5E-3
r_M = 0.15 # membrane resistance
r_ew = 0.175 # transverse resistance of electrode
r_el = 0.11 # longitudinal resistance of electrode
mu = 2.4
sigma_not = 0.1
a_L: float = 1.0 # distributed surface attenuation
r_hat = sp.sqrt(r_M ** 2 + r_ew ** 2 + r_el ** 2)
lambda_1 = 0.0001
dVol = 1
K = (F ** 2 * C_minus * d * (1 - C_minus * dVol)) / (R * T * k_e) # also K = a
K_hat = (K-lambda_1)/d
gamma_1 = 1.0
gamma_2 = 1.0
gamma_3 = 1.0
gamma_4 = 1.0
gamma_5 = 1.0
gamma_6 = 1.0
gamma_7 = 1.0
small_gamma_1 = 1.0
small_gamma_2 = 1.0
small_gamma_3 = 1.0
psi = gamma_1*x + gamma_2*y + gamma_3*z + gamma_4*x*y + gamma_5*x*z + gamma_6*y*z + gamma_7*x*y*z + (small_gamma_1/2)*x**2 + (small_gamma_2/2)*y**2 + (small_gamma_3/2)*x*z**2
psi_hat_part = ((sp.sin(((m + 1) * sp.pi) / 2 * h)) * x) * ((sp.sin(((n + 1) * sp.pi) / W)) * y) * ((sp.sin(((p + 1) * sp.pi) / L)) * z)
psi_hat = psi * psi_hat_part # Eqn. 19
print(psi_hat)
x1: float = -h
x2: float = h
y1: float = 0
y2: float = W
z1: float = 0
z2: float = L
I = psi_hat.integrate((x, x1, x2), (y, y1, y2), (z, z1, z2)) # Integration for a_mnp Eqn. 18
A_mnp = ((8 * K_hat) / (2 * h * W * L)) * I
Partial = A_mnp * ((sp.sin(((m + 1) * sp.pi) / 2 * h)) * x) * ((sp.sin(((n + 1) * sp.pi) / W)) * y) * ((sp.sin(((p + 1) * sp.pi) / L)) * z)
start = Partial.integrate((p, 0 , 10E9), (n, 0, 10E9), (m, 0, 10E9)) #when using infinity it goes weird, also integrating leads to higher thresholds than summation
a_mnp_denom = (((sp.sin(((m + 1) * sp.pi) / 2 * h)) ** 2) * ((sp.sin(((n + 1) * sp.pi) / W)) ** 2) * (
(sp.sin(((p + 1) * sp.pi) / L)) ** 2) + K_hat)
a_mnp = A_mnp / a_mnp_denom # Eqn. 18
U2 = sp.Function("U2")
U2 = a_mnp * ((sp.sin(((m + 1) * sp.pi) / 2 * h)) * x) * ((sp.sin(((n + 1) * sp.pi) / W)) * y) * (
(sp.sin(((p + 1) * sp.pi) / L)) * z) # Eqn. 13
x = np.arange(-h, h, 0.001)
y = np.arange(-h, h, 0.001)
z = np.arange(-h, h, 0.001)
f= sp.subs((U2), (x ,y ,z))
I currently get the error message: ValueError: subs accepts either 1 or 2 arguments. So it seems I cannot use the subs() method this way, and replace() also doesn't work too well. Are there any other methods one can use?
Any help would be greatly appreciated, thank you!
Oscar is right: you are trying to deal with too much of the problem at once. That aside, Numpy and SymPy do not work like you think they do. What were you hoping to see when you replaced 3 variables, each with a range?
You cannot replace a SymPy variable/Symbol with a Numpy arange object, but you can replace a Symbol with a single value:
>>> from sympy.abc import x, y
>>> a = 1.0
>>> u = x + y + a
>>> u.subs(x, 1)
y + 2.0
>>> u.subs([(x,1), (y,2)])
4.0
You might iterate over the arange values, creating values of f and then doing something with each value:
f = lambda v: u.subs(dict(zip((x,y),v)))
for xi in range(1,3): # replace range with your arange call
for yi in range(-4,-2):
fi = f((xi,yi))
print(xi,yi,fi)
Be careful about iterating and using x or y as your loop variable, however, since that will then lose the assignment of the Symbol to that variable:
for x in range(2):
print(u.subs(x, x)) # no change and x is no longer a Symbol, it is now an int
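If what you actually want is the expression evaluated numerically on a whole grid of values, the usual route is sp.lambdify, which turns a SymPy expression into a function that accepts NumPy arrays. A minimal sketch with the same u as above (u = x + y + 1.0):
import numpy as np
import sympy as sp
from sympy.abc import x, y

u = x + y + 1.0
f = sp.lambdify((x, y), u, modules='numpy')  # compile the expression into a NumPy-aware function
xv = np.arange(1, 3)
yv = np.arange(-4, -2)
print(f(xv, yv))  # elementwise evaluation over the arrays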
I'm having trouble with the slow computation of my Python code. Based on the pycallgraph below, the bottleneck seems to be the function miepython.miepython.mie_S1_S2 (highlighted in pink), which takes 0.47 seconds per call.
The source code for this module is as follows:
import numpy as np
from numba import njit, int32, float64, complex128
__all__ = ('ez_mie',
'ez_intensities',
'generate_mie_costheta',
'i_par',
'i_per',
'i_unpolarized',
'mie',
'mie_S1_S2',
'mie_cdf',
'mie_mu_with_uniform_cdf',
)
@njit((complex128, float64, float64[:]), cache=True)
def _mie_S1_S2(m, x, mu):
"""
Calculate the scattering amplitude functions for spheres.
The amplitude functions have been normalized so that when integrated
over all 4*pi solid angles, the integral will be qext*pi*x**2.
The units are weird, sr**(-0.5)
Args:
m: the complex index of refraction of the sphere
x: the size parameter of the sphere
mu: array of angles, cos(theta), to calculate scattering amplitudes
Returns:
S1, S2: the scattering amplitudes at each angle mu [sr**(-0.5)]
"""
nstop = int(x + 4.05 * x**0.33333 + 2.0) + 1
a = np.zeros(nstop - 1, dtype=np.complex128)
b = np.zeros(nstop - 1, dtype=np.complex128)
_mie_An_Bn(m, x, a, b)
nangles = len(mu)
S1 = np.zeros(nangles, dtype=np.complex128)
S2 = np.zeros(nangles, dtype=np.complex128)
nstop = len(a)
for k in range(nangles):
pi_nm2 = 0
pi_nm1 = 1
for n in range(1, nstop):
tau_nm1 = n * mu[k] * pi_nm1 - (n + 1) * pi_nm2
S1[k] += (2 * n + 1) * (pi_nm1 * a[n - 1]
+ tau_nm1 * b[n - 1]) / (n + 1) / n
S2[k] += (2 * n + 1) * (tau_nm1 * a[n - 1]
+ pi_nm1 * b[n - 1]) / (n + 1) / n
temp = pi_nm1
pi_nm1 = ((2 * n + 1) * mu[k] * pi_nm1 - (n + 1) * pi_nm2) / n
pi_nm2 = temp
# calculate norm = sqrt(pi * Qext * x**2)
n = np.arange(1, nstop + 1)
norm = np.sqrt(2 * np.pi * np.sum((2 * n + 1) * (a.real + b.real)))
S1 /= norm
S2 /= norm
return [S1, S2]
Apparently, the source code is jitted by Numba, so it should be faster than it actually is. The number of iterations in the nested for loops in this function is around 25,000 (len(mu) = 50, len(a) - 1 = 500).
Any ideas on how to speed up this computation? Is something hindering the fast computation of Numba? Or, do you think the computation is already fast enough?
[More details]
In the above, another function _mie_An_Bn is being used. This function is also jitted, and the source code is as follows:
@njit((complex128, float64, complex128[:], complex128[:]), cache=True)
def _mie_An_Bn(m, x, a, b):
"""
Compute arrays of Mie coefficients A and B for a sphere.
This estimates the size of the arrays based on Wiscombe's formula. The length
of the arrays is chosen so that the error when the series are summed is
around 1e-6.
Args:
m: the complex index of refraction of the sphere
x: the size parameter of the sphere
Returns:
An, Bn: arrays of Mie coefficents
"""
psi_nm1 = np.sin(x) # nm1 = n-1 = 0
psi_n = psi_nm1 / x - np.cos(x) # n = 1
xi_nm1 = complex(psi_nm1, np.cos(x))
xi_n = complex(psi_n, np.cos(x) / x + np.sin(x))
nstop = len(a)
if m.real > 0.0:
D = _D_calc(m, x, nstop + 1)
for n in range(1, nstop):
temp = D[n] / m + n / x
a[n - 1] = (temp * psi_n - psi_nm1) / (temp * xi_n - xi_nm1)
temp = D[n] * m + n / x
b[n - 1] = (temp * psi_n - psi_nm1) / (temp * xi_n - xi_nm1)
xi = (2 * n + 1) * xi_n / x - xi_nm1
xi_nm1 = xi_n
xi_n = xi
psi_nm1 = psi_n
psi_n = xi_n.real
else:
for n in range(1, nstop):
a[n - 1] = (n * psi_n / x - psi_nm1) / (n * xi_n / x - xi_nm1)
b[n - 1] = psi_n / xi_n
xi = (2 * n + 1) * xi_n / x - xi_nm1
xi_nm1 = xi_n
xi_n = xi
psi_nm1 = psi_n
psi_n = xi_n.real
The example inputs are like the following:
m = 1.336-2.462e-09j
x = 8526.95
mu = np.array([-1., -0.7500396, 0.46037385, 0.5988121, 0.67384093, 0.72468684, 0.76421644, 0.79175856, 0.81723714, 0.83962897, 0.85924182, 0.87641596, 0.89383665, 0.90708978, 0.91931481, 0.93067567, 0.94073113, 0.94961222, 0.95689496, 0.96467123, 0.97138347, 0.97791831, 0.98339434, 0.98870543, 0.99414948, 0.9975728, 0.9989995, 0.9989995, 0.9989995, 0.9989995, 0.9989995, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899951, 0.99899952, 0.99899952,
0.99899952, 0.99899952, 0.99899952, 0.99899952, 0.99899952, 0.99899952, 0.99899952, 1.])
I am focusing on _mie_S1_S2 since it appears to be the most expensive function on the provided example dataset.
First of all, you can pass the parameter fastmath=True to the JIT to accelerate the computation, as long as no values like +Inf, -Inf, -0 or NaN are computed.
Then you can pre-compute some expensive expressions containing divisions or implicit integer-to-float conversions. Note that (2 * n + 1) / n = 2 + 1/n and (n + 1) / n = 1 + 1/n; this can be used to reduce the number of precomputed arrays, but it did not change the performance on my machine (this may differ depending on the target architecture). Note also that such a precomputation has a slight impact on the result accuracy (most of the time negligible, and sometimes better than the reference implementation).
On my machine, this strategy makes the code 4.5 times faster with fastmath=True and 2.8 times faster without.
The k-based loop can be parallelized using Numba's parallel=True and prange. However, this may not always be faster on all machines (especially ones with a lot of cores) since the loop is already pretty fast.
Here is the final code:
import numba as nb  # needed for nb.prange below

@njit((complex128, float64, float64[:]), cache=True, parallel=True)
def _mie_S1_S2_opt(m, x, mu):
nstop = int(x + 4.05 * x**0.33333 + 2.0) + 1
a = np.zeros(nstop - 1, dtype=np.complex128)
b = np.zeros(nstop - 1, dtype=np.complex128)
_mie_An_Bn(m, x, a, b)
nangles = len(mu)
S1 = np.zeros(nangles, dtype=np.complex128)
S2 = np.zeros(nangles, dtype=np.complex128)
factor1 = np.empty(nstop, dtype=np.float64)
factor2 = np.empty(nstop, dtype=np.float64)
factor3 = np.empty(nstop, dtype=np.float64)
for n in range(1, nstop):
factor1[n - 1] = (2 * n + 1) / (n + 1) / n
factor2[n - 1] = (2 * n + 1) / n
factor3[n - 1] = (n + 1) / n
nstop = len(a)
for k in nb.prange(nangles):
pi_nm2 = 0
pi_nm1 = 1
for n in range(1, nstop):
i = n - 1
tau_nm1 = n * mu[k] * pi_nm1 - (n + 1.0) * pi_nm2
S1[k] += factor1[i] * (pi_nm1 * a[i] + tau_nm1 * b[i])
S2[k] += factor1[i] * (tau_nm1 * a[i] + pi_nm1 * b[i])
temp = pi_nm1
pi_nm1 = factor2[i] * mu[k] * pi_nm1 - factor3[i] * pi_nm2
pi_nm2 = temp
# calculate norm = sqrt(pi * Qext * x**2)
n = np.arange(1, nstop + 1)
norm = np.sqrt(2 * np.pi * np.sum((2 * n + 1) * (a.real + b.real)))
S1 /= norm
S2 /= norm
return [S1, S2]
%timeit -n 1000 _mie_S1_S2_opt(m, x, mu)
On my machine with 6 cores, the final optimized implementation is 12 times faster with fastmath=True and 8.8 times faster without. Note that using similar strategies in other functions may also help to speed them up.
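Since fastmath=True and the reordered arithmetic can change the last digits of the result, it may be worth comparing the optimized kernel against the original one on the example inputs, for instance:
# Sanity check: compare against the reference implementation on the example inputs
S1_ref, S2_ref = _mie_S1_S2(m, x, mu)
S1_opt, S2_opt = _mie_S1_S2_opt(m, x, mu)
assert np.allclose(S1_ref, S1_opt, rtol=1e-8)  # loose tolerance to allow for fastmath reordering
assert np.allclose(S2_ref, S2_opt, rtol=1e-8)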
So I am looking to solve a system of equations in Python 3.7 with NumPy. However, I need to solve the system of equations at the end of each iteration: during the iterations, other equations are solved whose results make up the contents of A and B, and then x is found from Ax = B. After solving for x, I need to save these values and feed them into the underlying equations for the following iteration, which are then used again to build A and B.
I have tried a more linear approach to solving the problem, but it is not good for my end goal of solving the equation attached in the image. What I have done so far is attached below:
i = 0
while (y[i] >= 0 ): #Object is above water
t[i+1] = t[i] + dt
vx[i+1] = vx[i] + dt * ax[i] #Update the velocities
vy[i+1] = vy[i] + dt * ay[i]
v_ax[i+1] = (vx[i]*np.sin(phi/180*np.pi)) - (vy[i]*np.cos(phi/180*np.pi))
v_nor[i+1] = (vx[i]*np.cos(phi/180*np.pi)) + (vy[i]*np.sin(phi/180*np.pi))
F_wnor[i+1] = (Cd_a * A_da * rho_air * (v_nor[i] - v_wind*np.sin(phi/180*np.pi)) * abs(v_nor[i] - v_wind*np.sin(phi/180*np.pi)))/2
F_wax[i+1] = (Cd_a * A_da * rho_air * (v_ax[i] - v_wind*np.sin(phi/180*np.pi)) * abs(v_ax[i] - v_wind*np.sin(phi/180*np.pi)))/2
F_wx[i+1] = (-F_wax[i] * np.sin(phi/180*np.pi)) - (F_wnor[i] * np.cos(phi/180*np.pi))
F_wy[i+1] = (F_wax[i] * np.cos(phi/180*np.pi)) - (F_wnor[i] * np.sin(phi/180*np.pi))
ax[i+1] = F_wx[i]/M
ay[i+1] = (F_wx[i]/M) - g
y[i+1] = (y[i]+dt*vy[i])
x[i+1] = (x[i]+dt*vx[i])
i = i + 1
j = i
#under water velocities
# if y(t)>0: M*z'' = M.g - Fb + Fd + Fm
while (y[j] <= 0 and y[j] > -10):
if (abs(y[j]/r)< 2):
theta_degree = 2 * np.arccos(1 - (abs(y[j])/r))
theta = theta_degree/180*np.pi
m = ((rho_water * r**2)/2) * (((2*(np.pi)**3*(1-np.cos(theta))) / ( 3 * (2*np.pi-theta)**2)) \
+ (np.pi * (1-np.cos(theta)*1/3)) + (np.sin(theta)) - (theta))
dm_dz = ((rho_water * r)/np.sin(theta/2)) * (((2 * (np.pi)**3 / 3) * ((np.sin(theta) / (2*np.pi - theta)**2) \
+ (2 * (1-np.cos(theta)) / (2*np.pi - theta )**3))) + (np.pi * np.sin(theta) / 3) + np.cos(theta) - 1)
A_i = (r**2)/2 * (theta - np.sin(theta))
F_m[j] = - m * ay[j] - dm_dz * np.max(vy)*vy[j]
F_uwater[j] = (M * g) - (rho_water * A_i * g) - (Cd_y * rho_water * r * vy[j] * abs(vy[j]))
else:
m = np.pi * rho_water * r**2
dm_dz = 0
A_i = np.pi * r**2
F_m[j] = - m * ay[j] - dm_dz * vy[j]**2
F_uwater[j] = (M * g) - (rho_water * A_i * g) - (Cd_y * rho_water * r * vy[j] * abs(vy[j]))
print("Fully submerged")
t[j+1] = t[j] + dt
vx[j+1] = vx[j] + dt * ax[j] #Update the velocities
vy[j+1] = vy[j] + dt * ay[j]
ax[j+1] = F_wx[j]/M
ay[j+1] = (F_uwater[j] + F_m[j]/M)
y[j+1] = (y[j]+dt*vy[j])
x[j+1] = (x[j]+dt*vx[j])
print(y[j])
j = j + 1
I do not know how to go about this, and any help getting started would be greatly appreciated!
The problem I am trying to solve can be seen more clearly in the picture attached. System of equations I am trying to solve
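For the Ax = B part itself, the usual NumPy pattern is to rebuild A and B from the current state on every iteration and call np.linalg.solve, then feed the solution back in. A minimal sketch of that loop (build_A and build_B are hypothetical placeholders for whatever equations fill the matrices, not my actual system):
import numpy as np

def build_A(state):
    # placeholder: assemble the coefficient matrix from the current state
    return np.array([[2.0, state[0]], [1.0, 3.0]])

def build_B(state):
    # placeholder: assemble the right-hand side from the current state
    return np.array([1.0, state[1]])

state = np.array([0.5, 1.0])       # hypothetical initial state
for step in range(10):
    A = build_A(state)
    B = build_B(state)
    x = np.linalg.solve(A, B)      # solve A x = B for this iteration
    state = x                      # reuse the solution in the next iteration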
I'm trying to solve a system of two ODEs numerically with the Runge-Kutta 4th-order method.
initial system:
system to solve:
And I get a very strange solution graph...
I have:
Correct graph:
I can't find the problem in my Runge-Kutta implementation. Please help me.
My code is here:
dt = 0.04
#initial conditions
t.append(0)
zdot.append(0)
z.append(A)
thetadot.append(0)
theta.append(B)
#derivative functions
def zdotdot(z_cur, theta_cur):
return -omega_z * z_cur - epsilon / 2 / m * theta_cur
def thetadotdot(z_cur, theta_cur):
return -omega_theta * theta_cur - epsilon / 2 / I * z_cur
i = 0
while True:
# runge_kutta
k1_zdot = zdotdot(z[i], theta[i])
k1_thetadot = thetadotdot(z[i], theta[i])
k2_zdot = zdotdot(z[i] + dt/2 * k1_zdot, theta[i])
k2_thetadot = thetadotdot(z[i], theta[i] + dt/2 * k1_thetadot)
k3_zdot = zdotdot(z[i] + dt/2 * k2_zdot, theta[i])
k3_thetadot = thetadotdot(z[i], theta[i] + dt/2 * k2_thetadot)
k4_zdot = zdotdot(z[i] + dt * k3_zdot, theta[i])
k4_thetadot = thetadotdot(z[i], theta[i] + dt * k3_thetadot)
zdot.append (zdot[i] + (k1_zdot + 2*k2_zdot + 2*k3_zdot + k4_zdot) * dt / 6)
thetadot.append (thetadot[i] + (k1_thetadot + 2*k2_thetadot + 2*k3_thetadot + k4_thetadot) * dt / 6)
z.append (z[i] + zdot[i + 1] * dt)
theta.append (theta[i] + thetadot[i + 1] * dt)
i += 1
Your state has 4 components, so you need 4 slopes in each stage. It should be obvious that the slope/update for z cannot come from k1_zdot; it has to be k1_z, which is computed beforehand as
k1_z = zdot[i]
and in the next stage
k2_z = zdot[i] + dt/2 * k1_zdot
etc.
But it is better to use a vectorized interface
def derivs(t,u):
z, theta, dz, dtheta = u
ddz = -omega_z * z - epsilon / 2 / m * theta
ddtheta = -omega_theta * theta - epsilon / 2 / I * z
return np.array([dz, dtheta, ddz, ddtheta]);
and then use the standard RK4 formulas
i = 0
while True:
# runge_kutta
k1 = derivs(t[i], u[i])
k2 = derivs(t[i] + dt/2, u[i] + dt/2 * k1)
k3 = derivs(t[i] + dt/2, u[i] + dt/2 * k2)
k4 = derivs(t[i] + dt, u[i] + dt * k3)
u.append (u[i] + (k1 + 2*k2 + 2*k3 + k4) * dt / 6)
i += 1
and later unpack as
z, theta, dz, dtheta = np.asarray(u).T
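A self-contained sketch of that vectorized scheme, with placeholder values for the physical constants (omega_z, omega_theta, epsilon, m, I, A and B below are assumptions just to make it runnable, not the asker's actual parameters):
import numpy as np

# placeholder parameters, chosen only so the sketch runs
omega_z, omega_theta, epsilon, m, I = 1.0, 1.5, 0.1, 1.0, 1.0
A, B = 1.0, 0.5              # initial z and theta
dt, nsteps = 0.04, 1000

def derivs(t, u):
    z, theta, dz, dtheta = u
    ddz = -omega_z * z - epsilon / 2 / m * theta
    ddtheta = -omega_theta * theta - epsilon / 2 / I * z
    return np.array([dz, dtheta, ddz, ddtheta])

t = [0.0]
u = [np.array([A, B, 0.0, 0.0])]   # state vector: z, theta, zdot, thetadot
for i in range(nsteps):
    k1 = derivs(t[i], u[i])
    k2 = derivs(t[i] + dt/2, u[i] + dt/2 * k1)
    k3 = derivs(t[i] + dt/2, u[i] + dt/2 * k2)
    k4 = derivs(t[i] + dt, u[i] + dt * k3)
    u.append(u[i] + (k1 + 2*k2 + 2*k3 + k4) * dt / 6)
    t.append(t[i] + dt)

z, theta, zdot, thetadot = np.asarray(u).T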
I have a stiff system of differential equations, written as first-order ODEs. The system is written in Maple, where the default method used is the Rosenbrock method. Now my task is to solve these equations with Python tools.
1) I do not know how to write the equations in Python code.
2) I do not know how to solve the equations with numpy, scipy, matplotlib or PyDSTool. For the library PyDSTool I did not find any examples at all, although I have read that it is well suited for stiff systems.
Code:
import math
import numpy
import scipy
import matplotlib
varepsilon = pow(10, -2); j = -2.5*pow(10, -2); e = 3.0; tau = 0.3; delta = 2.0
u0 = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) / 6
u = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) * (1 + delta) / 6
v = 1 / (1 - 2 / e) * math.sqrt(j ** 2 + (1 - 2 / e) * (e ** 2 * u ** 2 + 1))
y8 = lambda y1,y5,y7: 1 / (1 - 2 / y1) * math.sqrt(y5 ** 2 + (1 - 2 / y1) * (1 + y1 ** 2 * y7 ** 2))
E0 = lambda y1,y8: (1 - 2 / y1) * y8
Phi0 = lambda y1,y7: y1 ** 2 * y7
y08 = y8(y1=e, y5=j, y7=u0);
E = E0(y1=e, y8=y08); Phi = Phi0(y1=e, y7=u0)
# initial values
z01 = e; z03 = 0; z04 = 0; z05 = j; z07 = u0; z08 = y08;
p1 = -z1(x)*z5(x)/(z1(x)-2);
p3 = -z1(x)^2*z7(x);
p4 = z8(x)*(1-2/z1(x));
Q1 = -z5(x)^2/(z1(x)*(z1(x)-2))+(z8(x)^2/z1(x)^3-z7(x)^2)*(z1(x)-2);
Q3 = 2*z5(x)*z7(x)/z1(x);
Q4 = 2*z5(x)*z8(x)/(z1(x)*(z1(x)-2));
c1 = z1(x)*z7(x)*varepsilon;
c3 = -z1(x)*z5(x)*varepsilon;
C = z7(x)*varepsilon/z1(x)-z8(x)*(1-2/z1(x));
d1 = -z1(x)*z8(x)*varepsilon;
d3 = z1(x)*z5(x)*varepsilon;
B = z1(x)^2*z7(x)-z8(x)*varepsilon*(1-2/z1(x));
Omega = 1/(c1*d3*p3+c3*d1*p4-c3*d3*p1);
# differential equations
diff(z1(x), x) = z5(x);
diff(z3(x), x) = z7(x);
diff(z4(x), x) = z8(x);
diff(z5(x), x) = Omega*(-Q1*c1*d3*p3 - Q1*c3*d1*p4 + Q1*c3*d3*p1 + B*c3*p4 + C*d3*p3 + E*d3*p3 - Phi*c3*p4);
diff(z7(x), x) = -Omega*(Q3*c1*d3*p3 + Q3*c3*d1*p4 - Q3*c3*d3*p1 + B*c1*p4 - C*d1*p4 + C*d3*p1 - E*d1*p4 + E*d3*p1 - Phi*c1*p4);
diff(z8(x), x) = Omega*(-Q4*c1*d3*p3 - Q4*c3*d1*p4 + Q4*c3*d3*p1 + B*c1*p3 - B*c3*p1 - C*d1*p3 - E*d1*p3 - Phi*c1*p3 + Phi*c3*p1);
#features to be found and built curve
{z1(x), z3(x), z4(x), z5(x), z7(x), z8(x)}
After searching around on the Internet, I found something that works in principle:
import math
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
from scipy.signal import argrelextrema
from mpmath import mp, mpf
mp.dps = 50
varepsilon = pow(10, -2); j = 2.5*pow(10, -4); e = 3.0; tau = 0.5; delta = 2.0
u0 = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) / 6
u = -math.sqrt(-1 + math.sqrt(varepsilon ** 2 + 12) / varepsilon) * math.sqrt(2) * (1 + delta) / 6
v = 1 / (1 - 2 / e) * math.sqrt(j ** 2 + (1 - 2 / e) * (e ** 2 * u ** 2 + 1))
y8 = lambda y1,y5,y7: 1 / (1 - 2 / y1) * math.sqrt(y5 ** 2 + (1 - 2 / y1) * (1 + y1 ** 2 * y7 ** 2))
E0 = lambda y1,y8: (1 - 2 / y1) * y8
Phi0 = lambda y1,y7: y1 ** 2 * y7
y08 = y8(y1=e, y5=j, y7=u0);
E = E0(y1=e, y8=y08); Phi = Phi0(y1=e, y7=u0)
# initial values
z01 = e; z03 = 0.0; z04 = 0.0; z05 = j; z07 = u0; z08 = y08;
def model(x, z, varepsilon, E, Phi):
z1, z3, z4, z5, z7, z8 = z[0], z[1], z[2], z[3], z[4], z[5]
p1 = -z1*z5/(z1 - 2);
p3 = -pow(z1, 2) *z7;
p4 = z8*(1 - 2/z1);
Q1 = -pow(z5, 2)/(z1*(z1 - 2)) + (pow(z8, 2)/pow(z1, 3) - pow(z7, 2))*(z1 - 2);
Q3 = 2*z5*z7/z1;
Q4 = 2*z5*z8/(z1*(z1 - 2));
c1 = z1*z7*varepsilon;
c3 = -z1*z5*varepsilon;
C = z7*varepsilon/z1 - z8*(1 - 2/z1);
d1 = -z1*z8*varepsilon;
d3 = z1*z5*varepsilon;
B = pow(z1, 2)*z7 - z8*varepsilon*(1 - 2/z1);
Omega = 1/(c1*d3*p3+c3*d1*p4-c3*d3*p1);
# differential equations
dz1dx = z5;
dz3dx = z7;
dz4dx = z8;
dz5dx = Omega*(-Q1*c1*d3*p3 - Q1*c3*d1*p4 + Q1*c3*d3*p1 + B*c3*p4 + C*d3*p3 + E*d3*p3 - Phi*c3*p4);
dz7dx = -Omega*(Q3*c1*d3*p3 + Q3*c3*d1*p4 - Q3*c3*d3*p1 + B*c1*p4 - C*d1*p4 + C*d3*p1 - E*d1*p4 + E*d3*p1 - Phi*c1*p4);
dz8dx = Omega*(-Q4*c1*d3*p3 - Q4*c3*d1*p4 + Q4*c3*d3*p1 + B*c1*p3 - B*c3*p1 - C*d1*p3 - E*d1*p3 - Phi*c1*p3 + Phi*c3*p1);
dzdx = [dz1dx, dz3dx, dz4dx, dz5dx, dz7dx, dz8dx]
return dzdx
z0 = [z01, z03, z04, z05, z07, z08]
if __name__ == '__main__':
# Start by specifying the integrator:
# use ``vode`` with "backward differentiation formula"
r = integrate.ode(model).set_integrator('vode', method='bdf')
r.set_f_params(varepsilon, E, Phi)
# Set the time range
t_start = 0.0
t_final = 0.1
delta_t = 0.00001
# Number of time steps: 1 extra for initial condition
num_steps = np.floor((t_final - t_start)/delta_t) + 1
r.set_initial_value(z0, t_start)
t = np.zeros((int(num_steps), 1), dtype=np.float64)
Z = np.zeros((int(num_steps), 6,), dtype=np.float64)
t[0] = t_start
Z[0] = z0
k = 1
while r.successful() and k < num_steps:
r.integrate(r.t + delta_t)
# Store the results to plot later
t[k] = r.t
Z[k] = r.y
k += 1
# All done! Plot the trajectories:
Z1, Z3, Z4, Z5, Z7, Z8 = Z[:,0], Z[:,1] ,Z[:,2], Z[:,3], Z[:,4], Z[:,5]
plt.plot(t,Z1,'r-',label=r'$r(s)$')
plt.grid('on')
plt.ylabel(r'$r$')
plt.xlabel('proper time s')
plt.legend(loc='best')
plt.show()
plt.plot(t,Z5,'r-',label=r'$\frac{dr}{ds}$')
plt.grid('on')
plt.ylabel(r'$\frac{dr}{ds}$')
plt.xlabel('proper time s')
plt.legend(loc='best')
plt.show()
plt.plot(t, Z7, 'r-', label=r'$\frac{dϕ}{ds}$')
plt.grid('on')
plt.xlabel('proper time s')
plt.ylabel(r'$\frac{dϕ}{ds}$')
plt.legend(loc='upper center')
plt.show()
However, reviewing the solutions obtained with SciPy,
I encountered the problem that the solutions from SciPy and Maple are inconsistent. The essence of the problem is that the solutions oscillate rapidly, and Maple captures these oscillations with high precision using Rosenbrock's method, while Python has problems with this using backward differentiation formulas:
r = integrate.ode(model).set_integrator('vode', method='bdf')
http://www.scholarpedia.org/article/Backward_differentiation_formulas
I tried all of the integration modes: "vode", "zvode", "lsoda", "dopri5" and "dop853", and found that "vode" is the best suited; however, it still does not meet my needs...
https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html
So this method catches the oscillations in the range j ~ 10^{-5} to 10^{-3}, while Maple shows good results for any j.
I present the results obtained by scipy for j ~ 10^{-2}:
and the results obtained by Maple for j ~ 10^{-2}:
It is important that the oscillations are physical solutions! That is, Python captures the oscillations badly for j ~ 10^{-2}. Can anyone tell me what I'm doing wrong? And how can I look at the absolute error of the integration?
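One thing worth trying is the newer scipy.integrate.solve_ivp interface with one of its implicit stiff methods ('Radau', 'BDF' or 'LSODA') and explicit tolerances; rtol and atol are how the relative and absolute integration errors are controlled. A minimal sketch reusing model, z0, varepsilon, E and Phi from the code above (the args keyword needs a reasonably recent SciPy):
from scipy.integrate import solve_ivp

# 'Radau' is an implicit Runge-Kutta method intended for stiff problems;
# rtol/atol bound the local relative and absolute error per step.
sol = solve_ivp(model, (0.0, 0.1), z0, method='Radau',
                args=(varepsilon, E, Phi), rtol=1e-10, atol=1e-12,
                max_step=1e-4)
plt.plot(sol.t, sol.y[0], 'r-', label=r'$r(s)$')
plt.legend(loc='best')
plt.show()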