Coupled Differential Equations using Python - python

I am trying to solve a system of geodesic orbital equations using Python. They are coupled ordinary differential equations. I've tried different approaches, but they all yield the wrong shape (the curve should be periodic when plotting r against phi). Any idea how to do this?
Here are my constants
G = 4.30091252525 * (pow(10, -3)) #Gravitational constant in (parsec*km^2)/(Ms*sec^2)
c = 0.0020053761 #speed of light , AU/sec
M = 170000 #mass of the central body, in solar masses
m = 10 #mass of the orbiting body, in solar masses
rs = 2 * G * M / pow(c, 2) #Schwarzschild radius
Lz= 0.000024 #Angular momentum
h = Lz / m #Just the constant in equation
E= 1.715488e-007 #energy
And initial conditions are:
Y(0) = rs
Phi(0) = math.pi
Orbital equations
The way I tried to do it:
def rhs(t, u):
    Y, phi = u
    dY = np.sqrt((E**2 / (m**2 * c**2) - (1 - rs / Y) * (c**2 + h**2 / Y**2)))
    dphi = L / Y**2
    return [dY, dphi]

Y0 = np.array([rs, math.pi])
sol = solve_ivp(rhs, [1, 1000], Y0, method='Radau', dense_output=True)

It seems like you are looking at the spatial coordinates, in an invariant plane, of the geodesic equations of an object moving in Schwarzschild gravity.
One can use many different methods that preserve as much of the underlying geometric structure of the model as possible, like symplectic integrators or perturbation theory. As Lutz Lehmann pointed out in the comments, the default method of solve_ivp is the Dormand-Prince (4)5 stepper, used in extrapolation mode: the order-5 step is propagated, with the step-size selection driven by the error estimate of the order-4 step.
Warning: your initial condition for Y equals the Schwarzschild radius, so these equations may fail or require special treatment (especially the time component of the equations, which you have not included here!). You may have to switch to different coordinates that remove the singularity at the event horizon. Moreover, the solutions may not be periodic curves but quasi-periodic ones, so they may not close up nicely.
For a quick and dirty treatment, but possibly a fairly accurate one, I would differentiate the first equation
(dr / dtau)^2 = (E2_mc2 - c2) + (2*GM)/r - (h^2)/(r^2) + (r_schw*h^2)/(r^3)
with respect to the proper time tau, then cancel the common factor dr/dtau on both sides, and end up with an equation for the second derivative of the radius r on the left. Then turn this second-order equation into a pair of first-order equations for r and its rate of change v, i.e.
dphi / dtau = h / (r^2)
dr / dtau = v
dv / dtau = - GM / (r^2) + h^2 / (r^3) - 3*r_schw*(h^2) / (2*r^4)
and calculate, from the original equation relating r and its first derivative dr/dtau, an initial value for the rate of change v = dr/dtau, i.e. I would solve the equation below for v with r = r0:
(v0)^2 = (E2_mc2 - c2) + (2*GM)/r0 - (h^2)/(r0^2) + (r_schw*h^2)/(r0^3)
Python code along these lines might work:
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
#from ode_helpers import state_plotter
# u = [phi, Y, V, t] or if time is excluded
# u = [phi, Y, V]
def f(tau, u, param):
    E2_mc2, c2, GM, h, r_schw = param
    Y = u[1]
    f_phi = h / (Y**2)
    f_Y = u[2]  # this is the dr / dtau auxiliary equation
    f_V = - GM / (Y**2) + h**2 / (Y**3) - 3*r_schw*(h**2) / (2*Y**4)
    #f_time = (E2_mc2 * Y) / (Y - r_schw)  # this is the equation of the time coordinate
    return [f_phi, f_Y, f_V]  # or [f_phi, f_Y, f_V, f_time]

# from the initial value for r = Y0 and given energy E,
# calculate the initial rate of change dr / dtau = V0
def ivp(Y0, param, sign):
    E2_mc2, c2, GM, h, r_schw = param
    V0 = math.sqrt((E2_mc2 - c2) + (2*GM)/Y0 - (h**2)/(Y0**2) + (r_schw*h**2)/(Y0**3))
    return sign*V0
G = 4.30091252525 * (pow(10, -3)) #Gravitational constant in (parsec*km^2)/(Ms*sec^2)
c = 0.0020053761 #speed of light , AU/sec
M = 170000 #mass of the central body, in solar masses
m = 10 #mass of the orbiting body, in solar masses
Lz= 0.000024 #Angular momentum
h = Lz / m #Just the constant in equation
E= 1.715488e-007 #energy
c2 = c**2
E2_mc2 = (E**2) / (c2*m**2)
GM = G*M
r_schw = 2*GM / c2
param = [E2_mc2, c2, GM, h, r_schw]
Y0 = r_schw
sign = 1 # or -1
V0 = ivp(Y0, param, sign)
tau_span = np.linspace(1, 1000, num=1000)
u0 = [math.pi, Y0, V0]
sol = solve_ivp(lambda tau, u: f(tau, u, param), [1, 1000], u0, t_eval=tau_span)
Double-check the equations; mistakes and inaccuracies are possible.
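To inspect the orbit shape the question asks about, you can evaluate the computed solution and plot it in the invariant plane. A minimal sketch, assuming the integration above succeeds and using the state ordering u = [phi, Y, V]:
# plot the orbit in the invariant plane, (x, y) = (r*cos(phi), r*sin(phi))
phi_sol = sol.y[0]
r_sol = sol.y[1]
plt.figure()
plt.plot(r_sol * np.cos(phi_sol), r_sol * np.sin(phi_sol))
plt.gca().set_aspect('equal')
plt.xlabel('x')
plt.ylabel('y')
plt.show()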

Related

solve two simultaneous equations: one contains a Python function

On the map below, I have two known points (A and B) with their coordinates (longitude, latitude). I need to derive the coordinates of a point C which is on the line AB and is 100 kilometres away from A.
First I created a function to calculate the distances between two points in kilometres:
# pip install haversine
from haversine import haversine
def get_distance(lat_from, long_from, lat_to, long_to):
    distance_in_km = haversine((lat_from, long_from),
                               (lat_to, long_to),
                               unit='km')
    return distance_in_km
Then using the slope and the distance, the coordinates of point C should be the solution to the below equations:
# line segment AB and AC share the same slope, so
# (15.6-27.3)/(41.6-34.7) = (y-27.3)/(x-34.7)
# the distance between A and C is 100 km, so
# get_distance(y,x,27.3,34.7) = 100
Then I try to solve these two equations in Python:
from sympy import symbols, Eq, solve
slope = (15.6-27.3)/(41.6-34.7)
x, y = symbols('x y')
eq1 = Eq(y-slope*(x-34.7)-27.3)
eq2 = Eq(get_distance(y,x,34.7,27.3)-100)
solve((eq1,eq2), (x, y))
The error is TypeError: can't convert expression to float. I think I understand the error: the get_distance function expects float inputs, while my x and y in eq2 are of type sympy.core.symbol.Symbol.
I tried to add np.float(x), but the same error remains.
Is there a way to solve equations like these? Or do you have better ways to achieve what is needed?
Many thanks!
# there is a simple example of solving equations:
from sympy import symbols, Eq, solve
x, y = symbols('x y')
eq1 = Eq(2*x-y)
eq2 = Eq(x+2-y)
solve((eq1,eq2), (x, y))
# output: {x: 2, y: 4}
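One generic way to handle equations that mix a plain numerical function with unknowns (a sketch, not part of the answer below) is to skip symbolic solving altogether and hand the two conditions from the question to a numerical root finder such as scipy.optimize.fsolve:
from scipy.optimize import fsolve
from haversine import haversine

slope = (15.6 - 27.3) / (41.6 - 34.7)

def equations(p):
    x, y = p  # x = longitude, y = latitude of C
    on_line = y - slope * (x - 34.7) - 27.3                      # C lies on the line AB
    at_100km = haversine((y, x), (27.3, 34.7), unit='km') - 100  # C is 100 km from A
    return [on_line, at_100km]

# start the search a little way from A itself, since the distance is not differentiable at zero
x_c, y_c = fsolve(equations, (35.7, 25.6))
Note that this keeps the question's straight-line-in-lat/long formulation; the answer below uses proper spherical geometry instead.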
You can directly calculate that point. We can implement a Python version of the intermediate-point calculation for lat/long.
Be aware these calculations assume the Earth is a sphere and take its curvature into account, i.e. this is not a Euclidean approximation like the one in your original post.
Say we have two (lat, long) points A and B:
import numpy as np
A = (52.234869, 4.961132)
B = (46.491267, 26.994655)
EARTH_RADIUS = 6371.009
We can then calculate the intermediate-point fraction f by taking 100 / (distance between A and B in km):
from sklearn.neighbors import DistanceMetric
dist = DistanceMetric.get_metric('haversine')
point_1 = np.array([A])
point_2 = np.array([B])
delta = dist.pairwise(np.radians(point_1), np.radians(point_2) )[0][0]
f = 100 / (delta * EARTH_RADIUS)
phi_1, lambda_1 = np.radians(point_1)[0]
phi_2, lambda_2 = np.radians(point_2)[0]
a = np.sin((1-f) * delta) / np.sin(delta)
b = np.sin( f * delta) / np.sin(delta)
x = a * np.cos(phi_1) * np.cos(lambda_1) + b * np.cos(phi_2) * np.cos(lambda_2)
y = a * np.cos(phi_1) * np.sin(lambda_1) + b * np.cos(phi_2) * np.sin(lambda_2)
z = a * np.sin(phi_1) + b * np.sin(phi_2)
phi_n = np.arctan2(z, np.sqrt(x**2 + y**2) )
lambda_n = np.arctan2(y,x)
The point C, going from A to B, at a distance of 100 km from A, is then
C = np.degrees( phi_n ), np.degrees(lambda_n)
In this case
(52.02172458025681, 6.384361456573444)
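As a quick sanity check (a sketch reusing the haversine package imported in the question), the computed point should be about 100 km from A:
from haversine import haversine

C = (np.degrees(phi_n), np.degrees(lambda_n))
print(haversine(A, C, unit='km'))  # should print a value very close to 100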

Electrostatic Force - Simulate Trajectory of Test Particle using Runge Kutta - Force always Repels

In the center of a 2D plane a positive static charge Q is placed at position r_prime. This charge creates a static electric field E.
Now I want to place a test particle with charge q and position vector r in this static E-field and compute its trajectory using the 4th-order Runge-Kutta method.
For the initial conditions
Q = 1, r_prime=(0,0)
q = -1, r = (9, 0), v = (0,0)
one would expect the negatively charged test particle to move towards the positive charge in the center. Instead I get the following result for the time evolution of the test particle's x component:
[9.0,
9.0,
8.999876557604697,
8.99839964155741,
8.992891977287334,
8.979313669171093,
8.95243555913327,
8.906134626441052,
8.83385018027209,
8.729257993736123,
8.587258805422984,
8.405449606446608,
8.186368339303788,
7.940995661159361,
7.694260386250479,
7.493501689700884,
7.420415546859942,
7.604287312065716,
8.226733652039988,
9.498656905483394,
11.60015461031076,
14.621662121713964,
18.56593806599109,
....
The first iteration steps show the correct behavior, but then the particle is strangely repelled to infinity. There must be a major flaw in my implementation of the Runge-Kutta method, but I have checked it several times and I can't find any...
Could someone please take a quick look over my implementation and see if they can find a bug?
"""
Computes the trajectory of a test particle with Charge q with position vector r = R[:2] in a
electrostatic field that is generated by charge Q with fixed position r_prime
"""
import numpy as np
import matplotlib.pyplot as plt
def distance(r, r_prime, n):
"""
computes the euclidean distance to the power n between position x and x_prime
"""
return np.linalg.norm(r - r_prime)**n
def f(R, r_prime, q, Q):
"""
The equations of motion for the particle is given by:
d^2/dt^2 r(t) = F = constants * q * Q * (r - r_prime)/||r - r_prime||^3
To apply the Runge-Kutta-Method we transform the above (constants are set to 1)
two dimensional second order ODE into four one dimensional ODEs:
d/dt r_x = v_x
d/dt r_y = v_y
d/dt v_x = q * Q * (r_x - r_prime_x)/||r - r_prime||^3
d/dt v_y = q * Q * (r_y - r_prime_y)/||r - r_prime||^3 '''
"""
r_x, r_y = R[0], R[1]
v_x, v_y = R[2], R[3]
dist = 1 / distance(np.array([r_x, r_y]), r_prime, 3)
# Right Hand Side of the 1D Odes
f1 = v_x
f2 = v_y
f3 = q * Q * dist * (r_x - r_prime[0])
f4 = q * Q * dist * (r_y - r_prime[1])
return np.array([f1, f2, f3, f4], float)
# Constants for the Simulation
a = 0.0 # t_0
b = 10.0 # t_end
N = 100 # number of iterations
h = (b-a) / N # time step
tpoints = np.arange(a,b+h,h)
# Create lists to store the computed values
xpoints, ypoints = [], []
vxpoints, vypoints = [], []
# Initial Conditions
Q, r_prime = 1, np.array([0,0], float) # charge and position of particle that creates the static E-Field
q, R = -1, np.array([9,0,0,0], float) # charge and its initial position + velocity r=R[:2], v=[2:]
for dt in tpoints:
    xpoints.append(R[0])
    ypoints.append(R[1])
    vxpoints.append(R[2])
    vypoints.append(R[3])
    # Runge-Kutta 4th Order Method
    k1 = dt * f(R, r_prime, q, Q)
    k2 = dt * f(R + 0.5 * k1, r_prime, q, Q)
    k3 = dt * f(R + 0.5 * k2, r_prime, q, Q)
    k4 = dt * f(R + k3, r_prime, q, Q)
    R += (k1 + 2*k2 * 2*k3 + k4)
plt.plot(tpoints, xpoints) # should converge to 0
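For reference (this is not part of the original post), the classical RK4 update scales each slope by the fixed step size h, not by the loop's time value, and combines the slopes as a weighted average divided by 6. A minimal sketch of one such step, reusing the f defined above:
def rk4_step(R, h, r_prime, q, Q):
    # one classical 4th-order Runge-Kutta step of fixed size h
    k1 = h * f(R, r_prime, q, Q)
    k2 = h * f(R + 0.5 * k1, r_prime, q, Q)
    k3 = h * f(R + 0.5 * k2, r_prime, q, Q)
    k4 = h * f(R + k3, r_prime, q, Q)
    return R + (k1 + 2*k2 + 2*k3 + k4) / 6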

Stop Integrating when Output Reaches 0 in scipy.integrate.odeint

I've written code which looks at the projectile motion of an object with drag, using odeint from scipy for the time integration. The integration runs until a time limit is reached. I would like to stop integrating either when this limit is reached or when the output for ry = 0 (i.e. the projectile has landed).
def f(y, t):
    # takes vector y(t) and time t and returns function y_dot = F(t,y) as a vector
    # y(t) = [vx(t), vy(t), rx(t), ry(t)]
    # magnitude of velocity calculated
    v = np.sqrt(y[0] ** 2 + y[1] ** 2)
    # define new drag constant k
    k = cd * rho * v * A / (2 * m)
    return [-k * y[0], -k * y[1] - g, y[0], y[1]]

def solve_f(v0, ang, h0, tstep, tmax=1000):
    # uses the forward Euler time integration method to solve the ODE using f function
    # create vector for time steps based on args
    t = np.linspace(0, tmax, tmax / tstep)
    # convert angle from degrees to radians
    ang = ang * np.pi / 180
    return odeint(f, [v0 * np.cos(ang), v0 * np.sin(ang), 0, h0], t)

solution = solve_f(v0, ang, h0, tstep)
I've tried several loops and similar approaches to stop integrating when ry = 0, and found the question below, but have not been able to implement something similar with odeint. solution[:,3] is the output column for ry. Is there a simple way to do this with odeint?
Does scipy.integrate.ode.set_solout work?
Check out scipy.integrate.ode here. It is more flexible than odeint and helps with what you want to do.
A simple example using a vertical shot, integrated until it touches ground:
from scipy.integrate import ode, odeint
import scipy.constants as SPC

def f(t, y):
    return [y[1], -SPC.g]

v0 = 10
y0 = 0
r = ode(f)
r.set_initial_value([y0, v0], 0)
dt = 0.1
while r.successful() and r.y[0] >= 0:
    print('time: {:.3f}, y: {:.3f}, vy: {:.3f}'.format(r.t + dt, *r.integrate(r.t + dt)))
Each time you call r.integrate, r will store the current time and y value. You can append them to a list if you want to store them.
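As a side note (not part of the original answer), newer SciPy versions also offer solve_ivp with terminal events, which stops the integration when a condition crosses zero. A minimal sketch for the same vertical shot:
from scipy.integrate import solve_ivp
import scipy.constants as SPC

def rhs(t, y):
    return [y[1], -SPC.g]

def hit_ground(t, y):
    return y[0]             # height; the event fires at a zero crossing
hit_ground.terminal = True  # stop the integration at the event
hit_ground.direction = -1   # only trigger while falling

sol = solve_ivp(rhs, [0, 10], [0, 10], events=hit_ground)
print('time of impact:', sol.t_events[0][0])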
Let's solve this as the boundary value problem that it is. We have the conditions x(0)=0, y(0)=h0, vx(0)=vx0, vy(0)=vy0 and y(T)=0. To have a fixed integration interval, set t=T*s, which means that dx/ds=T*dx/dt=T*vx etc.
from scipy.integrate import solve_bvp  # np, plt and the drag constants cd, rho, A, m, g are as in the question

def fall_ode(t, u, p):
    vx, vy, rx, ry = u
    T = p[0]
    # magnitude of velocity calculated
    v = np.hypot(vx, vy)
    # define new drag constant k
    k = cd * rho * v * A / (2 * m)
    return np.array([-k * vx, -k * vy - g, vx, vy])*T

def solve_fall(v0, ang, h0):
    # convert angle from degrees to radians
    ang = ang * np.pi / 180
    vx0, vy0 = v0*np.cos(ang), v0*np.sin(ang)

    def fall_bc(y0, y1, p): return [ y0[0]-vx0, y0[1]-vy0, y0[2], y0[3]-h0, y1[3] ]

    t_init = [0, 1]
    u_init = [[0, 0], [0, 0], [0, 0], [h0, 0]]
    p_init = [1]
    res = solve_bvp(fall_ode, fall_bc, t_init, u_init, p_init)
    print(res.message)
    if res.success:
        print("time to ground: ", res.p[0])
    # res.sol is a dense output, evaluation interpolates the computed values
    return res.sol

sol = solve_fall(300, 30, 20)
s = np.linspace(0, 1, 501)
u = sol(s)
vx, vy, rx, ry = u
plt.plot(rx, ry)

Why does my windowed-sinc function have non-linear phase?

Similar to many tutorials on the web, I've tried implementing a windowed-sinc lowpass filter using the following python functions:
def black_wind(w):
    ''' blackman window of width w'''
    samps = np.arange(w)
    return (0.42 - 0.5 * np.cos(2 * np.pi * samps / (w-1)) + 0.08 * np.cos(4 * np.pi * samps / (w-1)))

def lp_win_sinc(tw, fc, n):
    ''' lowpass sinc impulse response
    Parameters:
        tw = approximate transition width [fraction of nyquist freq]
        fc = cutoff freq [fraction of nyquist freq]
        n = length of output.
    Returns:
        s = impulse response of windowed-sinc filter appended zero-padding
            to make len(s) = n
    '''
    m = int(np.ceil(4./tw / 2) * 2)
    samps = np.arange(m+1)
    shift = samps - m/2
    shift[m/2] = 1
    h = np.sin(2 * np.pi * fc * shift)/shift
    h[m/2] = 2 * np.pi * fc
    h = h * black_wind(m+1)
    h = h / h.sum()
    s = np.zeros(n)
    s[:len(h)] = h
    return s
For input: 'tw = 0.05', 'fc = 0.2', 'n = 6000', the magnitude of the fft seems reasonable.
tw = 0.05
fc = 0.2
n = 6000
lp = lp_win_sinc(tw, fc, n)
f_lp = np.fft.rfft(lp)
plt.figure()
x = np.linspace(0, 0.5, len(f_lp))
plt.plot(x, np.abs(f_lp))
magnitude of lowpass filter response
however, the phase is non-linear above ~fc.
plt.figure()
x = np.linspace(0, 0.5, len(f_lp))
plt.plot(x, np.unwrap(np.angle(f_lp)))
phase of lowpass filter response
Given the symmetry of the non-zero-padded portion of the impulse response, I would expect the resulting phase to be linear. Can someone explain what is going on? Perhaps I'm using a numpy function incorrectly, or maybe my expectations are incorrect. I'm very grateful for any help.
***********************EDIT***********************
Based on some of the helpful comments on this question and some more work, I wrote a function that produces zero phase delay, which makes the np.angle() results a bit easier to interpret.
def lp_win_sinc(tw, fc, n):
    m = int(np.ceil(2./tw) * 2)
    samps = np.arange(m+1)
    shift = samps - m/2
    shift[m/2] = 1
    h = np.sin(2 * np.pi * fc * shift)/shift
    h[m/2] = 2 * np.pi * fc
    h = h * np.blackman(m+1)
    h = h / h.sum()
    s = np.zeros(n)
    s[:len(h)] = h
    return np.roll(s, -m/2)
The main change here is using np.roll() to place the line of symmetry at t=0.
The magnitude response in the stop band crosses zero. The phase of a coefficient after a zero crossing jumps by 180 degrees, which confuses np.angle()/np.unwrap(): a value of -1 at a phase of 180° is the same as +1 at a phase of 0°.
The phase shown in your graph is in fact linear: it has a constant slope in the passband, corresponding to a constant delay in the time domain. In the stopband the slope is much steeper, which renders as wrapping around at 2*pi boundaries. But the value of the phase in the stopband is not particularly important, since those frequencies aren't going to come through the filter anyway.
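One way to confirm the constant passband delay numerically (a sketch, not from the original answers, reusing the lp array computed in the question) is to look at the group delay, which should be flat at m/2 samples across the passband:
from scipy.signal import group_delay

w, gd = group_delay((lp, 1), w=2048)  # FIR filter, so the denominator is 1
plt.figure()
plt.plot(w / np.pi, gd)  # frequency axis as a fraction of Nyquist
plt.xlabel('frequency [fraction of Nyquist]')
plt.ylabel('group delay [samples]')
plt.show()
The stopband portion of this curve is noisy for the same reason the unwrapped phase looks odd there: the response is essentially zero.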

Random initial velocities within Plummer model in Python

I want to randomly distribute N particles within a volume such that they satisfy the Plummer potential distribution. I am trying to work from "The Art of Computational Science" by Hut, which has a description, but I can't seem to implement it. Where I differ from Hut is that I require 3 velocity components for each particle. Here's what I have done so far:
f = 0
g = 0.1
if g > f*f*(1-f*f)**3.5:
    f = np.random.uniform(0, 1, N)
    g = np.random.uniform(0, 0.1, N)

vel_x = f*np.sqrt(2)*(1+x*x)**(-0.25)
vel_y = f*np.sqrt(2)*(1+y*y)**(-0.25)
vel_z = f*np.sqrt(2)*(1+z*z)**(-0.25)

vel = np.zeros((N,3))
vel[:,0] = vel_x
vel[:,1] = vel_y
vel[:,2] = vel_z
However, when I run the energy check described by Hut, which requires the kinetic energy to be ~0.147 in N-body units, this code fails. Any advice on where I'm going wrong would be greatly appreciated.
You are probably misreading the Ruby code in Hut's book since it is also generating 3-dimensional velocity vectors:
x = 0.0
y = 0.1
while y > x*x*(1.0-x*x)**3.5
  x = frand(0,1)
  y = frand(0,0.1)
end

velocity = x * sqrt(2.0) * ( 1.0 + radius*radius)**(-0.25)
theta = acos(frand(-1, 1))
phi = frand(0, 2*PI)
b.vel[0] = velocity * sin( theta ) * cos( phi )
b.vel[1] = velocity * sin( theta ) * sin( phi )
b.vel[2] = velocity * cos( theta )
The first part generates |v| by rejection sampling from the velocity distribution. The second part generates a random direction in space (in polar coordinates) and the last part of the code transforms from polar to Cartesian coordinates.
Your code does something completely different. You should instead adapt the code fragment shown above to Python, e.g.:
f = 0.0
g = 0.1
while g > f*f*(1.0-f*f)**3.5:
    f = np.random.uniform(0, 1)
    g = np.random.uniform(0, 0.1)

velocity = f * np.sqrt(2.0) * (1.0 + radius*radius)**(-0.25)
theta = np.arccos(np.random.uniform(-1, 1))
phi = np.random.uniform(0, 2*np.pi)
vel[n,0] = velocity * np.sin(theta) * np.cos(phi)
vel[n,1] = velocity * np.sin(theta) * np.sin(phi)
vel[n,2] = velocity * np.cos(theta)
The code could possibly be vectorised, but in reality that makes little sense, since the rejection sampling is not vectorisable (it might, and most likely will, take a different number of iterations for each sample).
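For completeness, a minimal sketch (not from the original answer) that wraps the fragment above in a loop over all N particles, assuming radii is an array holding each particle's radius in the same scaled units as the book's formula:
N = len(radii)
vel = np.zeros((N, 3))
for n in range(N):
    # rejection-sample the velocity-magnitude fraction f for particle n
    f, g = 0.0, 0.1
    while g > f*f*(1.0-f*f)**3.5:
        f = np.random.uniform(0, 1)
        g = np.random.uniform(0, 0.1)
    velocity = f * np.sqrt(2.0) * (1.0 + radii[n]*radii[n])**(-0.25)
    # random direction, uniform on the sphere
    theta = np.arccos(np.random.uniform(-1, 1))
    phi = np.random.uniform(0, 2*np.pi)
    vel[n, 0] = velocity * np.sin(theta) * np.cos(phi)
    vel[n, 1] = velocity * np.sin(theta) * np.sin(phi)
    vel[n, 2] = velocity * np.cos(theta)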
