I am coding a program to find how far a bullet shoots at a 20 degree angle with various parameters with and without drag. For some reason my program immediately calculates the y value of the bullet as being below zero and I think it may be a problem with the physics equations I have used. I don't have the strongest background in physics so after trying my best to figure out where I went wrong I figured I would ask for help. In the graph below I am attempting to display the different distances associated with different timesteps.
import math
import matplotlib.pyplot as plt
rho = 1.2754 #air density, kg/m^3
Af = math.pi*(16*0.0254)**2/4 #frontal area of a 16 inch diameter projectile, m^2
g = -9.8 #m/s^2
V = 250.0 #initial speed, m/s
#equation of motion
def update(r, V, a, dt):
    return r + V*dt + .5*a*dt*dt, V + a*dt

#drag force from velocity vector, return x and y
def drag(Vx, Vy):
    Vmag = math.sqrt(Vx**2 + Vy**2)
    Ff = .5*Cd*rho*Af*Vmag**2
    return -Ff*Vx/Vmag, -Ff*Vy/Vmag

def getTrajectory(dt, th):
    #initialize problem [time,x,y,Vx,Vy]
    state = [[0.0, 0.0, 0.0, V*math.cos(th), V*math.sin(th)]]
    while state[-1][2] >= 0: # while y > 0
        time = state[-1][0] + dt
        Fd = drag(state[-1][3], state[-1][4]) # Vx,Vy
        ax = Fd[0]/m # new x acceleration
        ay = (Fd[1]/m) + g # new y acceleration
        nextX, nextVx = update(state[-1][1], state[-1][3], ax, dt) #x, Vx, acceleration x, dt
        nextY, nextVy = update(state[-1][2], state[-1][4], ay, dt) #y, Vy, acceleration y, dt
        state.append([time, nextX, nextY, nextVx, nextVy])
        # linear interpolation
        dtf = -state[-1][2]*(state[-2][0]-state[-1][0])/(state[-2][2]-state[-1][2])
        xf, vxf = update(state[-1][1], state[-1][3], ax, dtf)
        print(xf, vxf)
        yf, vyf = update(state[-1][2], state[-1][4], ay, dtf)
        print(yf, vyf)
        state[-1] = [time+dtf, xf, yf, vxf, vyf]
        return state
delt = 0.01 #timestep
th = 20.0 * math.pi/180.0 #changing to radians
m = 2400 #kg
Cd = 1.0
tsteps = [0.001,0.01,0.1,1.0,3.0] # trying different timesteps to find the optimal one
xdistance = [getTrajectory(tstep,th)[-1][1] for tstep in tsteps]
plt.plot(tsteps,xdistance)
plt.show()
A print of the state variable, which contains all of the values of time, x, y, Vx, and Vy, after one timestep yields:
[[0.0, 0.0, 0.0, 234.9231551964771, 85.50503583141717],
 [0.0, 0.0, -9.540979117872439e-18, 234.9231551964771, 85.50503583141717]]
The first array holds the initial conditions, with Vx and Vy computed from the 20 degree angle. The second array should show the bullet flying off with positive x and y, but for some reason the y value is calculated as -9.54e-18, which quits the getTrajectory method, as it is only supposed to continue while y > 0.
In order to simplify my problem, I think it may help to look specifically at my update function.
def update(initial, V, a, dt): #current x or y, velocity in the x or y direction, acceleration in x or y, timestep
    return initial + V*dt + .5*a*dt*dt, V + a*dt
This function is supposed to return an updated x,Vx or an updated y,Vy depending on what is passed. I think this may be where the problem is in the physics equations but I'm not completely sure.
I think I see what's going on: in your while loop, you first make a forward time step to compute the kinematic variables at, say, t = 0.001 from their values at t = 0, and then you set dtf to a negative value and make a backward time step to compute new values at t = 0 from the values you just computed at t = 0.001 (or whatever). I'm not sure why you do that, but at any rate your update() function is using the Euler method which is numerically unstable - as you take more and more steps, it often diverges more and more from the true solution in somewhat unpredictable ways. In this case, that turns up as a negative y position, instead of setting it back to 0 as you might expect.
No method of numerical integration is going to give you perfect results, so you can always expect a little bit of inaccuracy, which may be positive or negative. Just because something is a little bit negative isn't necessarily cause for concern. (But it could be, depending on the situation.) That being said, you can get better results by using a more robust method of integration, like Runge-Kutta 4th-order (a.k.a. RK4) or a symplectic integrator. But those are somewhat advanced techniques, so I wouldn't worry about them too much for now.
As for the problem at hand, I wonder if you might have a sign error somewhere in your equations, possibly in the line that sets dtf. I can't say for sure, though, because I really don't understand how getTrajectory() is supposed to work.
This looks like an indentation error. The part that determines the location of the zero crossing of y, starting with the linear interpolation, is surely not meant to be part of the integration loop, and certainly not the return statement. Repairing the indentation gives the output
(3745.3646817336303, 205.84399509886356)
(6.278677479133594e-07, -81.7511764639779)
(3745.2091465034314, 205.8432621147464)
(8.637877372419503e-05, -81.75094436152747)
(3743.6311931495325, 205.83605594288176)
(0.01056407140434586, -81.74756057157798)
(3727.977248738945, 205.75827166721402)
(0.1316764390634169, -81.72155620161116)
(3669.3437231150106, 205.73704583954634)
(10.273469073526238, -80.52193274789204)
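For reference, here is a sketch of getTrajectory() with the zero-crossing block dedented out of the loop (same names and logic as in the question; only the indentation changed):

def getTrajectory(dt, th):
    #initialize problem [time,x,y,Vx,Vy]
    state = [[0.0, 0.0, 0.0, V*math.cos(th), V*math.sin(th)]]
    while state[-1][2] >= 0: # integrate until y drops below zero
        time = state[-1][0] + dt
        Fd = drag(state[-1][3], state[-1][4])
        ax = Fd[0]/m
        ay = (Fd[1]/m) + g
        nextX, nextVx = update(state[-1][1], state[-1][3], ax, dt)
        nextY, nextVy = update(state[-1][2], state[-1][4], ay, dt)
        state.append([time, nextX, nextY, nextVx, nextVy])
    # linear interpolation back to the zero crossing, done once, after the loop
    dtf = -state[-1][2]*(state[-2][0]-state[-1][0])/(state[-2][2]-state[-1][2])
    xf, vxf = update(state[-1][1], state[-1][3], ax, dtf)
    yf, vyf = update(state[-1][2], state[-1][4], ay, dtf)
    state[-1] = [time+dtf, xf, yf, vxf, vyf]
    return state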
I want to fit a 2D shape in an image. In the past, I have successfully done this using lmfit in Python and wrapping the 2D function/data to 1D. On that occasion, the 2D model was a smooth function (a ring with a gaussian profile). Now I am trying to do the same but with a "non-smooth function" and it is not working as expected.
This is what I am trying to do (guessed and fitted are the same):
I have shifted the guessed parameters on purpose, to easily see whether the fit moves them as expected, and nothing happens.
I have noticed that if instead of a swiss flag I use a 2D gaussian, which is a smooth function, this works fine (see MWE below):
So I guess the problem is related to the fact that the Swiss flag function is not smooth. I have tried to make it smooth by adding a gaussian filter (blur) but it still did not work, even though the swiss flag plot became very blurred.
After some time I came to think that maybe the step size that lmfit (or whatever is working in the background) uses is too small to produce any change in the swiss flag. I would like to try increasing the step size to 1, but I don't know exactly how to do that.
This is my MWE (sorry, it is still quite long):
import numpy as np
import myplotlib as mpl # https://github.com/SengerM/myplotlib
import lmfit

def draw_swiss_flag(fig, center, side, **kwargs):
    fig.plot(
        np.array(2*[side] + 2*[side/2] + 2*[-side/2] + 2*[-side] + 2*[-side/2] + 2*[side/2] + 2*[side]) + center[0],
        np.array([0] + 2*[side/2] + 2*[side] + 2*[side/2] + 2*[-side/2] + 2*[-side] + 2*[-side/2] + [0]) + center[1],
        **kwargs,
    )

def swiss_flag(x, y, center: tuple, side: float):
    # x, y numpy arrays.
    if x.shape != y.shape:
        raise ValueError(f'<x> and <y> must have the same shape!')
    flag = np.zeros(x.shape)
    flag[(center[0]-side/2<x)&(x<center[0]+side/2)&(center[1]-side<y)&(y<center[1]+side)] = 1
    flag[(center[1]-side/2<y)&(y<center[1]+side/2)&(center[0]-side<x)&(x<center[0]+side)] = 1
    return flag

def gaussian_2d(x, y, center, side):
    return np.exp(-(x-center[0])**2/side**2 - (y-center[1])**2/side**2)

def wrapper_for_lmfit(x, x_pixels, y_pixels, function_2D_to_wrap, *params):
    pixel_number = x # This is the pixel number in the data array
    # x_pixels and y_pixels are the number of pixels that the image has. This is needed to make the mapping.
    if (pixel_number > x_pixels*y_pixels - 1).any():
        raise ValueError('pixel_number (x) > x_pixels*y_pixels - 1')
    x = np.array([int(p%x_pixels) for p in pixel_number])
    y = np.array([int(p/x_pixels) for p in pixel_number])
    return function_2D_to_wrap(x, y, *params)

data = np.genfromtxt('data.txt') # Read data
data -= data.min().min()
data = data/data.max().max()

guessed_center = (data.sum(axis=0).argmax()+11, data.sum(axis=1).argmax()+11) # I am adding 11 on purpose.
guessed_side = 19

model = lmfit.Model(lambda x, xc, yc, side: wrapper_for_lmfit(x, data.shape[1], data.shape[0], swiss_flag, (xc,yc), side))
params = model.make_params()
params['xc'].set(value = guessed_center[0], min = 0, max = data.shape[1])
params['yc'].set(value = guessed_center[1], min = 0, max = data.shape[0])
params['side'].set(value = guessed_side, min = 0)
fit_results = model.fit(data.ravel(), params, x = [i for i in range(len(data.ravel()))])

mpl.manager.set_plotting_package('matplotlib')
fit_plot = mpl.manager.new(
    title = 'Data vs fit',
    aspect = 'equal',
)
fit_plot.colormap(data)
draw_swiss_flag(fit_plot, guessed_center, guessed_side, label = 'Guessed')
draw_swiss_flag(fit_plot, (fit_results.params['xc'],fit_results.params['yc']), fit_results.params['side'], label = 'Fitted')

swiss_flag_plot = mpl.manager.new(
    title = 'Swiss flag plot',
    aspect = 'equal',
)
xx, yy = np.meshgrid(np.array([i for i in range(data.shape[1])]), np.array([i for i in range(data.shape[0])]))
swiss_flag_plot.colormap(
    z = swiss_flag(xx, yy, center = (fit_results.params['xc'],fit_results.params['yc']), side = fit_results.params['side']),
)
mpl.manager.show()
and this is the content of data.txt.
It seems your code is all fine. The issue is, as you already guessed, that the algorithm used by lmfit is not dealing well with non-smooth data.
By default lmfit uses a least-squares method. Let's change it to the method 'differential_evolution' instead:
params['side'].set(value=guessed_side, min=0, max=len(data))
fit_results = model.fit(data.ravel(), params,
                        x=[i for i in range(len(data.ravel()))],
                        method='differential_evolution',
                       )
Note that I needed to add some finite value for the max value to prevent a "differential_evolution requires finite bound for all varying parameters" message.
After switching to the evolutionary algorithm, the fit now looks like this:
All the fitting algorithms in lmfit (and scipy.optimize, for that matter), including the "global optimizers", really work on continuous variables (double precision). When trying to find the optimal parameter values, most of the algorithms will make a very small step (at the ~1.e-7 level) in a value to determine the derivative, which will then be used to make the next guess of the optimal values.
The problem you're seeing is that your model function uses the parameter values as discrete values, as the index of an array via int(). If a small change is made to a parameter value, no change in the result will be detected, so the algorithm will decide that the fit result does not depend on small changes to that value.
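A tiny illustration of that effect, using a made-up one-parameter model (not the asker's code):

import numpy as np

arr = np.arange(10.0)

def model(p):
    return arr[int(p)] # the parameter is used as a discrete index

# a derivative-based solver probes steps of roughly this size:
print(model(3.0) - model(3.0 + 1e-7)) # 0.0, so the fit sees no dependence on p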
The so-called "global solvers" like differential evolution, basin-hopping, shgo, take the view that the derivative approach can lead to "false minima" and so will "spray parameter space" with lots of candidate values and then use different strategies to refine the best of those results to find the optimal values. Generally speaking, these are much slower to run (OTOH runtime is cheap!) and very good for problems where there may be multiple "minima" and you really want to find the best of these, or where getting a decent guess of starting values is very hard.
For your problem, it is pretty clear that you can guess starting values (the center pixels must be on the image, say, so maybe guess "the middle"), and it seems likely from the image that there are not a lot of false minima that might be found. That means that the expense of a global solver might not be needed.
Another approach would be to allow your shaped object to be centered at any continuous position in the image, not only at integer pixels. Of course, you do have to map that onto the discrete image, but it doesn't need to be fully on/off. Using sigmoidal functions like scipy.special.erf() and erfc() will allow you to still have a transition from "on" to "off", but with a small, finite width, bleeding into adjacent pixels. And that would be enough to allow a fit to find a continuous (and so, sub-pixel!) value for the center position. In 1-d, that might look like:
from scipy.special import erf

def smoothed_window(x, edge1, edge2, width):
    return (erf((x-edge1)/width) + erf((edge2-x)/width))/2.0
For integer x values, a width of 0.5 (that is, half a pixel) will almost certainly allow a fit to find sub-integer values for edge1 and edge2. (Aside: either force the width parameter to be fixed or force it to be positive, either in the code or at the Parameter level.)
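For instance, with illustrative edge values and smoothed_window as defined above:

import numpy as np

x = np.arange(20)
w1 = smoothed_window(x, 4.0, 12.0, 0.5)
w2 = smoothed_window(x, 4.3, 12.0, 0.5) # move edge1 by 0.3 of a pixel
print(np.abs(w1 - w2).max()) # clearly nonzero: the fit can sense sub-pixel shifts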
I have not tried to extend that to your more complicated "swiss flag" function, but it should be possible and also work for fitting center values.
I'm trying to solve numerically a parabolic partial differential equation (PDE):
u_t = u_xx - u(1-u)(0.3-u)
with Neumann boundary conditions and a step like function as an initial condition.
Expected plot result is here
It may be a bit misleading: red is the initial step-like function, and the smooth lines are later iterations in time (t = 50, 100, 150, ...).
The idea is based on the method of lines: discretize space (x), approximate the second derivative by finite differences (second order, via the symmetric centered approximation), and then solve the resulting set of ODEs with the standard Python scipy odeint (which can use implicit methods, if I'm not mistaken). I think this is a pretty standard way to deal with such PDEs (correct me if I'm wrong).
Below are code snippets I'm running.
In a nutshell, I want to represent the above continuous equation in a matrix-based form:
[u]_t = A*[u] + f([u])
where A is the matrix for the second-order symmetric centered approximation of u_xx (after multiplying by [u]).
Unfortunately, there is almost nothing qualitatively significant happening with initial conditions after running the code, so I'm confused whether there is a problem with my script or in an approach itself.
Defining the right side u(1-u)(0.3-u):
import numpy as np
from scipy.sparse import diags
from scipy.integrate import odeint

def f(u, h):
    puff = []
    for i in u:
        puff.append(i*(1. - i)*(0.3 - i))
    puff[0] = (h/2.)*puff[0] # this is to include the boundary conditions
    puff[len(puff)-1] = (h/2.)*puff[len(puff)-1]
    return puff
Defining spatial resolution:
h = 10.
x = np.linspace(0., 100., num=h)
This is to define second derivative by the means of finite difference(A*[u]):
diag = [-2]*(len(x))
diag[0] = -h
diag[len(x)-1] = h**2
diag_up = [1]*(len(x)-1)
diag_up[0] = h
diag_down = [1]*(len(x)-1)
diag_down[len(x)-2] = 0
diagonals = [diag, diag_up, diag_down]
A = diags(diagonals, [0, 1, -1]).toarray()
A = A/h**2
Defining step-like initial condition:
init_cond = [0]*(len(x))
for i in range(int(round(0.2*len(init_cond)))):
    init_cond[i] = 1
Solving set of ODEs with a time frame defined.
t = np.linspace(0, 100, 200)

def pde(u, t, A, h):
    dudt = A.dot(u) - f(u, h)
    return dudt

sol = odeint(pde, init_cond, t, args=(A, h))
print(sol[len(sol)-1])
The code may be flawed as I'm rather new to Python. If there is anything I can improve, please let me know. The final result is here
Initial conditions did not change at all. Can anyone give me a hint where I could make a mistake? Thank you in advance!
I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write such a quotient as a sum of simpler quotients with constant numerators (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second term can be integrated with a change of variables u = 1-y to give -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. It is clear that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so assume the initial value lies strictly between them and plug in the values for y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C; then use y = 0.8 to get a value for t. Note that because of the logarithm, y will reach 0.8 within a relatively short range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
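As a quick worked example (assuming y(0) = 0.5, since the question does not specify an initial value):

import math

y0 = 0.5
C = math.log(y0 / (1 - y0)) # C = ln(1) = 0
t = (math.log(0.8 / 0.2) - C) / 0.01 # invert 0.01*t + C = ln(y/(1-y)) at y = 0.8
print(t) # 138.629..., comfortably inside 0 < t < 3000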
Edit: Numerical integration
For a more complexed ODE which cannot be solved analytically, you will have to try numerically. Initially we only know the value of the function at zero time y(0) (we have to know at least that in order to uniquely define the trajectory of the function), and how to evaluate the gradient. The idea of numerical integration is that we can use our knowledge of the gradient (which tells us how the function is changing) to work out what the value of the function will be in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
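For concreteness, a minimal sketch of the Euler scheme just described, applied to the example equation (assuming y(0) = 0.5 and dt = 0.01, neither of which is fixed by the question):

dt, t, y = 0.01, 0.0, 0.5
t_prev, y_prev = t, y
while y < 0.8 and t < 3000:
    t_prev, y_prev = t, y
    y += 0.01 * y * (1 - y) * dt # y(t+dt) = y(t) + dy/dt * dt
    t += dt
# interpolate between the last two points for a more precise crossing time
t_cross = t_prev + (0.8 - y_prev) * (t - t_prev) / (y - y_prev)
print(t_cross) # close to 100*ln(4), about 138.6, for this equation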
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not constant between pairs of time points, a certain amount of error will arise for each integration step, which over time will build up until the answer is completely inaccurate. In order to improve the quality of the integration, it is necessary to use more sophisticated approximations to the gradient. Check out for example the Runge-Kutta methods, which are a family of integrators which remove progressive orders of error term at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)

t = 0.1 # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y > 0.8:
        print("y(%f) = %f" % (t, y))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1) # time series - start, stop, step

def rhs(y, t):
    return 0.01*y*(1-y)

y0 = np.array([0.5]) # initial value (note: y0 = 1 is a fixed point of this ODE, so y would never move)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the dimensions of ts match those of ys). (This may not work first time because I am constructing from memory.)
This might involve using the scipy interpolate function on the ys array, so that you get a result at time t.
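For example, the crossing time can be pulled out of ys with a local linear interpolation (names from the snippet above; this assumes y actually passes 0.8 inside the time window, which it does for e.g. y0 = 0.5):

y = ys[:, 0]
i = int(np.argmax(y >= 0.8)) # first index at or past the target
t_hit = np.interp(0.8, y[i-1:i+1], ts[i-1:i+1]) # interpolate between neighbours
print(t_hit) # about 138.6 for y0 = 0.5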
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; Odeint on the scipy website has examples for systems such as coupled springs that can be solved for, and these could be extended.
What you are asking for is an ODE integrator with root-finding capabilities. They exist, and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in Python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
As per advice given to me in this answer, I have implemented a Runge-Kutta integrator in my gravity simulator.
However, after I simulate one year of the solar system, the positions are still off by about 110,000 kilometers, which isn't acceptable.
My initial data was provided by NASA's HORIZONS system. Through it, I obtained position and velocity vectors of the planets, Pluto, the Moon, Deimos and Phobos at a specific point in time.
These vectors were 3D; however, some people told me that I could ignore the third dimension, since the planets lie roughly in a plane around the Sun, and so I did. I merely copied the x-y coordinates into my files.
This is the code of my improved update method:
"""
Measurement units:
[time] = s
[distance] = m
[mass] = kg
[velocity] = ms^-1
[acceleration] = ms^-2
"""
class Uni:
def Fg(self, b1, b2):
"""Returns the gravitational force acting between two bodies as a Vector2."""
a = abs(b1.position.x - b2.position.x) #Distance on the x axis
b = abs(b1.position.y - b2.position.y) #Distance on the y axis
r = math.sqrt(a*a + b*b)
fg = (self.G * b1.m * b2.m) / pow(r, 2)
return Vector2(a/r * fg, b/r * fg)
#After this is ran, all bodies have the correct accelerations:
def updateAccel(self):
#For every combination of two bodies (b1 and b2) out of all bodies:
for b1, b2 in combinations(self.bodies.values(), 2):
fg = self.Fg(b1, b2) #Calculate the gravitational force between them
#Add this force to the current force vector of the body:
if b1.position.x > b2.position.x:
b1.force.x -= fg.x
b2.force.x += fg.x
else:
b1.force.x += fg.x
b2.force.x -= fg.x
if b1.position.y > b2.position.y:
b1.force.y -= fg.y
b2.force.y += fg.y
else:
b1.force.y += fg.y
b2.force.y -= fg.y
#For body (b) in all bodies (self.bodies.itervalues()):
for b in self.bodies.itervalues():
b.acceleration.x = b.force.x/b.m
b.acceleration.y = b.force.y/b.m
b.force.null() #Reset the force as it's not needed anymore.
def RK4(self, dt, stage):
#For body (b) in all bodies (self.bodies.itervalues()):
for b in self.bodies.itervalues():
rd = b.rk4data #rk4data is an object where the integrator stores its intermediate data
if stage == 1:
rd.px[0] = b.position.x
rd.py[0] = b.position.y
rd.vx[0] = b.velocity.x
rd.vy[0] = b.velocity.y
rd.ax[0] = b.acceleration.x
rd.ay[0] = b.acceleration.y
if stage == 2:
rd.px[1] = rd.px[0] + 0.5*rd.vx[0]*dt
rd.py[1] = rd.py[0] + 0.5*rd.vy[0]*dt
rd.vx[1] = rd.vx[0] + 0.5*rd.ax[0]*dt
rd.vy[1] = rd.vy[0] + 0.5*rd.ay[0]*dt
rd.ax[1] = b.acceleration.x
rd.ay[1] = b.acceleration.y
if stage == 3:
rd.px[2] = rd.px[0] + 0.5*rd.vx[1]*dt
rd.py[2] = rd.py[0] + 0.5*rd.vy[1]*dt
rd.vx[2] = rd.vx[0] + 0.5*rd.ax[1]*dt
rd.vy[2] = rd.vy[0] + 0.5*rd.ay[1]*dt
rd.ax[2] = b.acceleration.x
rd.ay[2] = b.acceleration.y
if stage == 4:
rd.px[3] = rd.px[0] + rd.vx[2]*dt
rd.py[3] = rd.py[0] + rd.vy[2]*dt
rd.vx[3] = rd.vx[0] + rd.ax[2]*dt
rd.vy[3] = rd.vy[0] + rd.ay[2]*dt
rd.ax[3] = b.acceleration.x
rd.ay[3] = b.acceleration.y
b.position.x = rd.px[stage-1]
b.position.y = rd.py[stage-1]
def update (self, dt):
"""Pushes the uni 'dt' seconds forward in time."""
#Repeat four times:
for i in range(1, 5, 1):
self.updateAccel() #Calculate the current acceleration of all bodies
self.RK4(dt, i) #ith Runge-Kutta step
#Set the results of the Runge-Kutta algorithm to the bodies:
for b in self.bodies.itervalues():
rd = b.rk4data
b.position.x = b.rk4data.px[0] + (dt/6.0)*(rd.vx[0] + 2*rd.vx[1] + 2*rd.vx[2] + rd.vx[3]) #original_x + delta_x
b.position.y = b.rk4data.py[0] + (dt/6.0)*(rd.vy[0] + 2*rd.vy[1] + 2*rd.vy[2] + rd.vy[3])
b.velocity.x = b.rk4data.vx[0] + (dt/6.0)*(rd.ax[0] + 2*rd.ax[1] + 2*rd.ax[2] + rd.ax[3])
b.velocity.y = b.rk4data.vy[0] + (dt/6.0)*(rd.ay[0] + 2*rd.ay[1] + 2*rd.ay[2] + rd.ay[3])
self.time += dt #Internal time variable
The algorithm is as follows:
Update the accelerations of all bodies in the system
RK4(first step)
goto 1
RK4(second)
goto 1
RK4(third)
goto 1
RK4(fourth)
Did I mess something up with my RK4 implementation? Or did I just start with corrupted data (too few important bodies and ignoring the 3rd dimension)?
How can this be fixed?
Explanation of my data etc...
All of my coordinates are relative to the Sun (i.e. the Sun is at (0, 0)).
./my_simulator 1yr
Earth position: (-1.47589927462e+11, 18668756050.4)
HORIZONS (NASA):
Earth position: (-1.474760457316177E+11, 1.900200786726017E+10)
I got the 110 000 km error by subtracting the Earth's x coordinate given by NASA from the one predicted by my simulator.
relative error = (my_x_coordinate - nasa_x_coordinate) / nasa_x_coordinate * 100
= (-1.47589927462e+11 + 1.474760457316177E+11) / -1.474760457316177E+11 * 100
= 0.077%
The relative error seems minuscule, but that's simply because the Earth is really far away from the Sun, both in my simulation and in NASA's data. The distance is still huge and renders my simulator useless.
110 000 km absolute error means what relative error?

"I got the 110 000 km value by subtracting my predicted Earth's x coordinate with NASA's Earth x coordinate."
I'm not sure what you're calculating here or what you mean by "NASA's Earth x coordinate". That's a distance from what origin, in what coordinate system, at what time? (As far as I know, the earth moves in orbit around the sun, so its x-coordinate w.r.t. a coordinate system centered at the sun is changing all the time.)
In any case, you calculated an absolute error of 110,000 km by subtracting your calculated value from "NASA's Earth x coordinate". You seem to think this is a bad answer. What's your expectation? To hit it spot on? To be within a meter? One km? What's acceptable to you and why?
You get a relative error by dividing your error difference by "NASA's Earth x coordinate". Think of it as a percentage. What value do you get? If it's 1% or less, congratulate yourself. That would be quite good.
You should know that floating point numbers aren't exact on computers. (You can't represent 0.1 exactly in binary any more than you can represent 1/3 exactly in decimal.) There are going to be errors. Your job as a simulator is to understand the errors and minimize them as best you can.
You could have a stepsize problem. Try reducing your time step size by half and see if you do better. If you do, it says that your results have not converged. Reduce by half again until you achieve acceptable error.
Your equations might be poorly conditioned. Small initial errors will be amplified over time if that's true.
I'd suggest that you non-dimensionalize your equations and calculate the stability limit step size. Your intuition about what a "small enough" step size should be might surprise you.
I'd also read a bit more about the many body problem. It's subtle.
You might also try a numerical integration library instead of your integration scheme. You'll program your equations and give them to an industrial strength integrator. It could give some insight into whether or not it's your implementation or the physics that causes the problem.
Personally, I don't like your implementation. It'd be a better solution if you'd done it with mathematical vectors in mind. The "if" test for the relative positions leaves me cold. Vector mechanics would make the signs come out naturally.
UPDATE:
OK, your relative errors are pretty small.
Of course the absolute error does matter - depending on your requirements. If you're landing a vehicle on a planet you don't want to be off by that much.
So you need to stop making assumptions about what constitutes too small a step size and do what you must to drive the errors to an acceptable level.
Are all the quantities in your calculation 64-bit IEEE floating point numbers? If not, you'll never get there.
A 64 bit floating point number has about 16 digits of accuracy. If you need more than that, you'll have to use an arbitrary-precision type like Java's BigDecimal or - wait for it - rescale your equations to use a length unit other than kilometers. If you scale all your distances by something meaningful for your problem (e.g., the diameter of the earth or the length of the major/minor axis of the earth's orbit) you might do better.
To do a RK4 integration of the solar system you need a very good precision or your solution will diverge quickly. Assuming you have implemented everything correctly, you may be seeing the drawbacks with RK for this sort of simulation.
To verify if this is the case: try a different integration algorithm. I found that using Verlet integration a solar system simulation will be much less chaotic. Verlet is much simpler to implement than RK4 as well.
The reason Verlet (and derived methods) are often better than RK4 for long-term prediction (like full orbits) is that they are symplectic, that is, they preserve the geometric structure of the dynamics, so conserved quantities such as energy and momentum drift far less over time than with RK4. Thus Verlet will give better behavior even after it diverges (a realistic simulation, but with an error in it), whereas RK4 will give non-physical behavior once it diverges.
Also: make sure you are using floating point as well as you can. Don't use distances in meters in the solar system, since the absolute precision of floating point numbers is much better in the 0..1 interval. Using AU or some other normalized scale is much better than using meters. As suggested in the other topic, make sure you use an epoch for the time, to avoid accumulating errors when adding time steps.
Such simulations are notoriously unreliable. Rounding errors accumulate and introduce instability. Increasing precision doesn't help much; the problem is that you (have to) use a finite step size and nature uses a zero step size.
You can reduce the problem by reducing the step size, so it takes longer for the errors to become apparent. If you are not doing this in real time, you can use a dynamic step size which reduces if two or more bodies are very close.
One thing I do with these kinds of simulations is "re-normalise" after each step to make the total energy the same. The sum of gravitational plus kinetic energy for the system as a whole should be a constant (conservation of energy). Work out the total energy after each step, and then scale all the object speeds by a constant amount to keep the total energy a constant. This at least keeps the output looking more plausible. Without this scaling, a tiny amount of energy is either added to or removed from the system after each step, and orbits tend to blow up to infinity or spiral into the sun.
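A minimal sketch of that rescaling step, assuming bodies shaped like those in the question (m, position and velocity attributes; E0 is the total energy recorded at the start of the run):

import math
from itertools import combinations

def renormalize(bodies, E0, G):
    # total kinetic and gravitational potential energy of the system
    ke = sum(0.5*b.m*(b.velocity.x**2 + b.velocity.y**2) for b in bodies)
    pe = -sum(G*b1.m*b2.m / math.hypot(b1.position.x - b2.position.x,
                                       b1.position.y - b2.position.y)
              for b1, b2 in combinations(bodies, 2))
    # one common factor for all speeds so that the total energy is E0 again
    s = math.sqrt((E0 - pe) / ke)
    for b in bodies:
        b.velocity.x *= s
        b.velocity.y *= s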
Very simple changes that will improve things (proper usage of floating point values)
Change the unit system to use as many mantissa bits as possible. Using meters, you're doing it wrong... Use AU, as suggested above. Even better, scale things so that the solar system fits in a 1x1x1 box.
Already said in another post: compute your time as time = epoch_count * time_step, not by repeatedly adding! This way, you avoid accumulating errors.
When doing a summation of several values, use a high-precision sum algorithm, like Kahan summation. In Python, math.fsum does it for you; for example:
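import math

values = [0.1] * 10
print(sum(values)) # 0.9999999999999999 - plain summation accumulates roundoff
print(math.fsum(values)) # 1.0 - compensated summation recovers the exact result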
Shouldn't the force decomposition be

th = math.atan2(b, a)
return Vector2(math.cos(th) * fg, math.sin(th) * fg)

(see http://www.physicsclassroom.com/class/vectors/Lesson-3/Resolution-of-Forces or https://fiftyexamples.readthedocs.org/en/latest/gravity.html#solution)?
BTW: you take the square root to calculate the distance, but you actually need the squared distance...
Why not simplify
r = math.sqrt(a*a + b*b)
fg = (self.G * b1.m * b2.m) / pow(r, 2)
to
r2 = a*a + b*b
fg = (self.G * b1.m * b2.m) / r2
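Putting these two points together with the earlier remark that vector mechanics makes the signs come out naturally, here is a sketch of an Fg() that returns signed components directly (same Vector2 class as in the question; the if-blocks in updateAccel() then collapse to b1.force += fg, b2.force -= fg):

def Fg(self, b1, b2):
    """Returns the signed gravitational force on b1 due to b2 as a Vector2."""
    dx = b2.position.x - b1.position.x
    dy = b2.position.y - b1.position.y
    r2 = dx*dx + dy*dy # squared distance - no square root needed for the magnitude
    fg = self.G * b1.m * b2.m / r2
    r = math.sqrt(r2) # only needed to normalize the direction
    return Vector2(fg * dx/r, fg * dy/r)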
I am not sure about python, but in some cases you get more precise calculations for intermediate results (Intel CPUs support 80 bit floats, but when assigning to variables, they get truncated to 64 bit):
Floating point computation changes if stored in intermediate "double" variable
It is not quite clear in what frame of reference you have your planet coordinates and velocities. If it is in heliocentric frame (the frame of reference is tied to the Sun), then you have two options: (i) perform calculations in non-inertial frame of reference (the Sun is not motionless), (ii) convert positions and velocities into the inertial barycentric frame of reference. If your coordinates and velocities are in barycentric frame of reference, you must have coordinates and velocities for the Sun as well.
How can I find the point where the first derivative of my equation equals 0 using scipy.integrate.ode?
I set up this function, which gets the answer, but I'm not sure about accuracy and it can't be the most efficient way to do this.
Basically I am using this function to find the time a projectile with initial velocity stops moving. With systems of ODEs, is there a better way to solve for this answer?
import numpy as np
from scipy import integrate

def find_nearest(array, value):
    idx = (np.abs(array-value)).argmin()
    return array[idx], idx

def deriv(x, t):
    # This function sets up the following relations
    # dx/dt = v , dv/dt = -(Cp/m)*(4+v^2)
    return np.array([x[1], -(0.005/0.1) * (4 + (x[1]**2))])

def findzero(start, stop, v0):
    time = np.linspace(start, stop, 100000)
    # xinit are the initial values of the equation
    xinit = np.array([0.0, v0])
    x = integrate.odeint(deriv, xinit, time)
    # find the velocity value nearest to 0
    value, num = find_nearest(x[:,1], 0.0001)
    print('closest value', value)
    print('reaches zero at time', time[num])
    return time[num]

# from 0 to 20 seconds with initial velocity of 100 m/s
b = findzero(0.0, 20.0, 100.0)
In general, a good approach to solve this sort of problem is to rewrite your equations so that velocity is the independent variable and time and distance are the dependent variables. Then, you simply have to integrate the equations from v=v0 to v=0.
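For the system in the question this works directly: inverting dv/dt = -(Cp/m)*(4+v^2) gives dt/dv = -m/(Cp*(4+v^2)), so the stopping time reduces to a single quadrature. A sketch using the question's values (m = 0.1, Cp = 0.005):

from scipy import integrate

m, Cp, v0 = 0.1, 0.005, 100.0
# t(v=0) = integral from 0 to v0 of m/(Cp*(4 + v**2)) dv
t_stop, abserr = integrate.quad(lambda v: m / (Cp * (4.0 + v*v)), 0.0, v0)
print(t_stop) # about 15.508 s, matching the closed form below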
However, in the example you give it is not even necessary to resort to scipy.integrate at all. The equations can be easily solved with pencil and paper (separation of variables followed by a standard integral). The result is
t = (m/(2*Cp)) * arctan(v0/2)
where v0 is the initial velocity and the result of arctan must be taken in radians.
For an initial velocity of 100 m/s, the answer is 15.5079899282 seconds.
I would use something like scipy.optimize.fsolve() to find the roots of the derivative. Using this, one can work backwards to find the time taken to reach a root.
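A sketch of that idea, reusing deriv() from the question and a bracketing root finder (scipy.optimize.brentq here; fsolve works similarly given a reasonable starting guess):

import numpy as np
from scipy import integrate, optimize

def velocity_at(t, v0=100.0):
    # integrate the question's system from 0 to t and return v(t)
    if t == 0.0:
        return v0
    x = integrate.odeint(deriv, np.array([0.0, v0]), [0.0, t])
    return x[-1, 1]

# v(t) is positive at small t and negative past the stopping point,
# so a sign change is bracketed within [1e-6, 20]
t_stop = optimize.brentq(velocity_at, 1e-6, 20.0)
print(t_stop) # about 15.508 s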