I don't really know where to start with this problem, as I haven't had much experience with numerical methods, but this part of the project has to be solved on a computer.
I have a 2nd-order ODE:
m*x''(t) + b*x'(t) + k*x(t) + a*(x(t))^3 = -m*g
with
m = 1220
k = 35600
g = 17.5
a = 450000
and b between 1000 and 10000 in increments of 500. The initial conditions are
x(0) = 0
x'(0) = 5
I need to find the smallest b such that the solution is never positive. I know what the graph should look like, but I just don't know how to use odeint to get a solution to the differential equation.
This is the code I have so far:
from numpy import *
from matplotlib.pylab import *
from scipy.integrate import odeint
m = 1220.0
k = 35600.0
g = 17.5
a = 450000.0
x0= [0.0,5.0]
b = 1000
tmax = 10
dt = 0.01
def fun(x, t):
    return (b*x[1] - k*x[0] - a*(x[0]**3) - m*g)*(1.0/m)
t_rk = arange(0,tmax,dt)
sol = odeint(fun, x0, t_rk)
plot(t_rk,sol)
show()
Which doesn't really produce much of anything.
Any thoughts? Thanks
To solve a second-order ODE using scipy.integrate.odeint, you should write it as a system of first-order ODEs:
I'll define z = [x', x], then z' = [x'', x'], and that's your system! Of course, you have to plug in your real relations:
x'' = -(b*x'(t) + k*x(t) + a*(x(t))^3 + m*g) / m
becomes:
z[0]' = -1/m * (b*z[0] + k*z[1] + a*z[1]**3 + m*g)
z[1]' = z[0]
Or, just call it d(z):
def d(z, t):
    return np.array((
        -1/m * (b*z[0] + k*z[1] + a*z[1]**3 + m*g),  # this is z[0]'
        z[0]                                         # this is z[1]'
    ))
Now you can feed it to odeint as such:
_, x = odeint(d, x0, t).T
(The _ is a blank placeholder for the x' component we made. Note that with z = [x', x], the initial-condition vector must follow the same ordering: x0 = [5.0, 0.0] for x'(0) = 5 and x(0) = 0.)
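Put together, a complete runnable version might look like this (a sketch; b is fixed at 1000, the lower end of your range, and your time grid is reused):

import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt

m, k, g, a, b = 1220.0, 35600.0, 17.5, 450000.0, 1000.0

def d(z, t):
    # z = [x', x]
    return np.array((
        -1/m * (b*z[0] + k*z[1] + a*z[1]**3 + m*g),  # z[0]' = x''
        z[0],                                        # z[1]' = x'
    ))

t = np.arange(0, 10, 0.01)
x0 = [5.0, 0.0]            # [x'(0), x(0)]
_, x = odeint(d, x0, t).T

plt.plot(t, x)
plt.xlabel('t'); plt.ylabel('x')
plt.show()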
In order to minimize b subject to the constraint that the maximum of x is always negative, you can use scipy.optimize.minimize. I'll implement it by actually maximizing the maximum of x, subject to the constraint that it remains negative, because I can't think of how to minimize a parameter without being able to invert the function.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import odeint

m = 1220
k = 35600
g = 17.5
a = 450000
z0 = np.array([-.5, 0])
t = np.arange(0, 10, 0.01)  # integration time grid

def d(z, t, m, k, g, a, b):
    return np.array([-1/m * (b*z[0] + k*z[1] + a*z[1]**3 + m*g), z[0]])

def func(b, z0, *args):
    _, x = odeint(d, z0, t, args=args+(b,)).T
    return -x.max()  # minimize negative max

cons = [{'type': 'ineq', 'fun': lambda b: b - 1000, 'jac': lambda b: 1},    # b > 1000
        {'type': 'ineq', 'fun': lambda b: 10000 - b, 'jac': lambda b: -1},  # b < 10000
        {'type': 'ineq', 'fun': lambda b: func(b, z0, m, k, g, a)}]         # func(b) > 0 means x < 0

b0 = 10000
b_min = minimize(func, b0, args=(z0, m, k, g, a), constraints=cons)
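minimize returns an OptimizeResult, so once it finishes, the damping it found and the corresponding objective value can be read off:

print(b_min.x)    # the optimal b (a length-1 array)
print(b_min.fun)  # -max(x) at that b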
I don't think you can solve your problem as stated: your initial conditions, with x = 0 and x' > 0 imply that the solution will be positive for some values very close to the starting point. So there is no b for which the solution is never positive...
Leaving that aside, to solve a second order differential equation, you first need to rewrite it as a system of two first order differential equations. Defining y = x' we can rewrite your single equation as:
x' = y
y' = -b/m*y - k/m*x - a/m*x**3 - g
x[0] = 0, y[0] = 5
So your function should look something like this:
def fun(z, t, m, k, g, a, b):
    x, y = z
    return np.array([y, -(b*y + (k + a*x*x)*x) / m - g])
And you can solve and plot your equations doing:
m, k, g, a = 1220, 35600, 17.5, 450000
tmax, dt = 10, 0.01
t = np.linspace(0, tmax, num=int(round(tmax/dt)) + 1)
for b in range(1000, 10500, 500):
    print('Solving for b = {}'.format(b))
    sol = odeint(fun, [0, 5], t, args=(m, k, g, a, b))[..., 0]
    plt.plot(t, sol, label='b = {}'.format(b))
plt.legend()
plt.show()
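If you only need the smallest b on the 500-step grid from the question, the same loop can be extended with a check. A sketch reusing the objects above; per the remark at the top, the strict test max(x) <= 0 fails for every b because x'(0) = 5 pushes x above zero right after t = 0, so the first half second is skipped here (an arbitrary cutoff, adjust to taste):

b_smallest = None
for b in range(1000, 10500, 500):
    sol = odeint(fun, [0, 5], t, args=(m, k, g, a, b))[..., 0]
    if sol[t > 0.5].max() <= 0:  # ignore the initial transient
        b_smallest = b
        break
print(b_smallest)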
I have a few lines of code which don't converge. If anyone has an idea why, I would greatly appreciate it. The original equation is written in def f(x, y, b, m) and I need to find the parameters b, m.
np.random.seed(42)
x = np.random.normal(0, 5, 100)
y = 50 + 2 * x + np.random.normal(0, 2, len(x))
def f(x, y, b, m):
    return (1/len(x))*np.sum((y - (b + m*x))**2)  # it is supposed to be a sum operator

def dfb(x, y, b, m):  # partial derivative with respect to b
    return b - m*np.mean(x) + np.mean(y)

def dfm(x, y, b, m):  # partial derivative with respect to m
    return np.sum(x*y - b*x - m*x**2)

b0 = np.mean(y)
m0 = 0
alpha = 0.0001
beta = 0.0001
epsilon = 0.01

while True:
    b = b0 - alpha * dfb(x, y, b0, m0)
    m = m0 - alpha * dfm(x, y, b0, m0)
    if np.sum(np.abs(m - m0)) <= epsilon and np.sum(np.abs(b - b0)) <= epsilon:
        break
    else:
        m0 = m
        b0 = b

print(m, f(x, y, b, m))
Both derivatives got some signs mixed up:
def dfb(x, y, b, m):  # partial derivative with respect to b
    # return b - m*np.mean(x)+np.mean(y)
    #          ^-------------^------ these signs are incorrect
    return b + m*np.mean(x) - np.mean(y)

def dfm(x, y, b, m):  # partial derivative with respect to m
    #      v------ this should be negative
    return -np.sum(x*y - b*x - m*x**2)
In fact, these derivatives are still missing some constants:
dfb should be multiplied by 2
dfm should be multiplied by 2/len(x)
I imagine that's not too bad because the gradient is scaled by alpha anyway, but it could make the speed of convergence worse.
If you do use the correctly scaled derivatives, the loop's stopping test will trigger after a single iteration (the very first update is already no larger than epsilon = 0.01), so pick a smaller epsilon if you want to actually reach the minimum. The scaled derivatives are:
def dfb(x, y, b, m):  # partial derivative with respect to b
    return 2 * (b + m * np.mean(x) - np.mean(y))

def dfm(x, y, b, m):  # partial derivative with respect to m
    # Used `mean` here since (2/len(x)) * np.sum(...)
    # is the same as 2 * np.mean(...)
    return -2 * np.mean(x * y - b * x - m * x**2)
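Plugging the corrected, fully scaled gradients into the question's loop with a smaller epsilon shows the convergence (a sketch using the data from the question; the iteration cap is an arbitrary safeguard):

import numpy as np

np.random.seed(42)
x = np.random.normal(0, 5, 100)
y = 50 + 2 * x + np.random.normal(0, 2, len(x))

def dfb(x, y, b, m):
    return 2 * (b + m * np.mean(x) - np.mean(y))

def dfm(x, y, b, m):
    return -2 * np.mean(x * y - b * x - m * x**2)

b, m = np.mean(y), 0.0
alpha, epsilon = 0.0001, 1e-7
for _ in range(200000):              # safety cap on the number of iterations
    b_new = b - alpha * dfb(x, y, b, m)
    m_new = m - alpha * dfm(x, y, b, m)
    if abs(b_new - b) <= epsilon and abs(m_new - m) <= epsilon:
        break
    b, m = b_new, m_new

print(b, m)  # should end up close to 50 and 2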
Let's say I have a 3d object on a grid V = V(a, b, c). I want to interpolate V(a, b + alpha*d, c+d).
In other words, define f(d) = V(a, b + alpha*d, c + d). I want to approximate f. Importantly, I then want to apply optimize.root to the approximation, so efficient evaluation of f matters.
For example,
alpha = 0.5
aGrid = np.linspace(5, 10, 30)
bGrid = np.linspace(4, 7, 40)
cGrid = np.linspace(0.1, 0.5, 20)
A, B, C = np.meshgrid(aGrid, bGrid, cGrid, indexing='ij')
V = A**2 + B*C
# define initial a, b, c
idx = (7, 8, 9)
a, b, c = A[idx], B[idx], C[idx]
# so V(a, b, c) = V[idx]
A naive approach would be
g = scipy.interpolate.interp2d(bGrid, cGrid, V[7, ...])
f = lambda x: g(b + alpha*x, c + x)
and my ultimate goal:
constant = 10
err = lambda x: f(x) - constant
scipy.optimize.root(err, np.array([5]))
However, this all looks very messy and inefficient. Is there a more pythonic way of accomplishing this?
I have changed the notation to aid my understanding of the question (I am used to physics notation). There is a scalar field V(x, y, z) in 3D space.
We define a parametric line in this 3D space:
f_{x0, y0, z0, v_x, v_y, v_z}(t) = (x0 + v_x*t, y0 + v_y*t, z0 + v_z*t)
It can be seen as the trajectory of a particle starting at the point (x0, y0, z0) and moving along a straight line with velocity vector (v_x, v_y, v_z).
We are looking for the time t1 such that V( f(t1) ) is equal to a specific given value V0. Is this the question being asked?
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import root_scalar
import matplotlib.pylab as plt
# build the field
aGrid = np.linspace(5, 10, 30)
bGrid = np.linspace(4, 7, 40)
cGrid = np.linspace(0.1, 0.5, 20)
A, B, C = np.meshgrid(aGrid, bGrid, cGrid, indexing='ij')
V = A**2 + B*C
# Build a continuous field by linear interpolation of the gridded data:
V_interpolated = RegularGridInterpolator((aGrid, bGrid, cGrid), V,
bounds_error=False, fill_value=None)
# define the parametric line
idx = (7, 8, 9)
x0, y0, z0 = A[idx], B[idx], C[idx]
alpha = 0.5
v_x, v_y, v_z = 0, alpha, 1
def line(t):
    xyz = (x0 + v_x*t, y0 + v_y*t, z0 + v_z*t)
    return xyz
# Plot V(x,y,z) along this line (to check, is there a unique solution?)
t_span = np.linspace(0, 10, 23)
V_along_the_line = V_interpolated( line(t_span) )
plt.plot(t_span, V_along_the_line);
plt.xlabel('t'); plt.ylabel('V');
# Find t such that V( f(t) ) == V1
V1 = 80
sol = root_scalar(lambda s: V_interpolated( line(s) ) - V1,
x0=.0, x1=10)
print(sol)
# converged: True
# flag: 'converged'
# function_calls: 8
# iterations: 7
# root: 5.385594973846983
# Get the coordinates of the solution point:
print("(x,y,z)_sol = ", line(sol.root))
# (6.206896551724138, 7.308182102308106, 5.675068658057509)
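If you need this for many target values V1 (or several lines), the interpolant is built once and only the scalar root-find runs per target; a small sketch reusing the objects above:

for V1 in (60, 70, 80, 90):
    sol = root_scalar(lambda s: V_interpolated(line(s)) - V1, x0=0.0, x1=10)
    print(V1, sol.root, line(sol.root))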
I have checked python non linear ODE with 2 variables, which is not my case. Maybe my case is not called a nonlinear ODE; correct me please.
The question is actually about the Frenet frame, in which there are 3 vectors T(s), N(s) and B(s) with parameter s >= 0, plus two scalar functions with known formulas, t(s) and k(s). I have the initial values T(0), N(0) and B(0).
diff(T(s), s) = k(s)*N(s)
diff(N(s), s) = -k(s)*T(s) + t(s)*B(s)
diff(B(s), s) = -t(s)*N(s)
Then how can I get T(s), N(s) and B(s) numerically or symbolically?
I have checked scipy.integrate.ode, but I don't know how to pass k(s)*N(s) into its first parameter at all:
def model(z, tspan):
    T = z[0]
    N = z[1]
    B = z[2]
    dTds = k(s) * N  # how to express function k(s)?
    dNds = -k(s) * T + t(s) * B
    dBds = -t(s) * N
    return [dTds, dNds, dBds]

z = scipy.integrate.ode(model, [T0, N0, B0])
Here is a code using solve_ivp interface from Scipy (instead of odeint) to obtain a numerical solution:
import numpy as np
from scipy.integrate import solve_ivp
from scipy.integrate import cumtrapz
import matplotlib.pylab as plt
# Define the parameters as regular Python functions:
def k(s):
    return 1

def t(s):
    return 0

# The equations: dz/ds = model(s, z):
def model(s, z):
    T = z[:3]  # z is a (9,)-shaped array, the concatenation of T, N and B
    N = z[3:6]
    B = z[6:]
    dTds = k(s) * N
    dNds = -k(s) * T + t(s) * B
    dBds = -t(s) * N
    return np.hstack([dTds, dNds, dBds])
T0, N0, B0 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
z0 = np.hstack([T0, N0, B0])
s_span = (0, 6)  # start and final "time"
t_eval = np.linspace(*s_span, 100)  # points where the solution is evaluated.
                                    # Not strictly necessary: the solver picks
                                    # its own internal steps. It is used here to
                                    # obtain a reasonably accurate integration
                                    # of the coordinates, see the graph.
# Solve:
sol = solve_ivp(model, s_span, z0, t_eval=t_eval, method='RK45')
print(sol.message)
# >> The solver successfully reached the end of the integration interval.
# Unpack the solution:
T, N, B = np.split(sol.y, 3) # another way to unpack the z array
s = sol.t
# Bonus: integration of the normal vector in order to get the coordinates
# to plot the curve (there is certainly better way to do this)
coords = cumtrapz(T, x=s)
plt.plot(coords[0, :], coords[1, :]);
plt.axis('equal'); plt.xlabel('x'); plt.ylabel('y');
T, N and B are vectors. Therefore, there are 9 equations to solve: z is a (9,) array.
For constant curvature and no torsion, the result is a circle:
Thanks for your example. Thinking about it again, I found that since there is a formula for dZ, where Z is the matrix (T, N, B), we can calculate Z[i] = Z[i-1] + dZ[i-1]*deltaS following the definition of the derivative. I coded this idea and found it solves the circle example. So:
Is Z[i] = Z[i-1] + dZ[i-1]*deltaS suitable for other ODEs? Will it fail in some situations, or do scipy.integrate.solve_ivp/scipy.integrate.ode offer advantages over the direct use of Z[i] = Z[i-1] + dZ[i-1]*deltaS?
In my code I have to normalize Z[i] because ||Z[i]|| is not always 1. Why does this happen? Is it a floating-point error?
My answer to my own question; at least it works for the circle:
import numpy as np
from scipy.integrate import cumtrapz
import matplotlib.pylab as plt
# Define the parameters as regular Python functions:
def k(s):
    return 1

def t(s):
    return 0

def dZ(s, Z):
    return np.array(
        [k(s) * Z[1], -k(s) * Z[0] + t(s) * Z[2], -t(s) * Z[1]]
    )
T0, N0, B0 = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])
deltaS = 0.1 # step to calculate dZ/ds
num = int(2*np.pi*1/deltaS) + 1 # how many points on the curve we have to calculate
T = np.zeros([num, ], dtype=object)
N = np.zeros([num, ], dtype=object)
B = np.zeros([num, ], dtype=object)
T[0] = T0
N[0] = N0
B[0] = B0
for i in range(num-1):
    temp_dZ = dZ(i*deltaS, np.array([T[i], N[i], B[i]]))
    T[i+1] = T[i] + temp_dZ[0]*deltaS
    T[i+1] = T[i+1]/np.linalg.norm(T[i+1])  # have to do this
    N[i+1] = N[i] + temp_dZ[1]*deltaS
    N[i+1] = N[i+1]/np.linalg.norm(N[i+1])
    B[i+1] = B[i] + temp_dZ[2]*deltaS
    B[i+1] = B[i+1]/np.linalg.norm(B[i+1])
coords = cumtrapz(
    [[v[0] for v in T], [v[1] for v in T], [v[2] for v in T]],
    x=np.arange(num)*deltaS
)
plt.figure()
plt.plot(coords[0, :], coords[1, :]);
plt.axis('equal'); plt.xlabel('x'); plt.ylabel('y');
plt.show()
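Regarding the two questions above: Z[i] = Z[i-1] + dZ[i-1]*deltaS is the explicit (forward) Euler method. It works for many ODEs but is only first-order accurate, and the drift of ||T|| away from 1 is its truncation error, not floating-point error; halving deltaS roughly halves it. solve_ivp instead takes higher-order Runge-Kutta steps with adaptive step size, which is its main advantage over the hand-rolled update. A quick check (a sketch, under the same k(s) = 1, t(s) = 0 assumption):

import numpy as np
from scipy.integrate import solve_ivp

def dZ(s, Z):
    T, N, B = Z[:3], Z[3:6], Z[6:]
    return np.hstack([N, -T, 0*B])  # k(s) = 1, t(s) = 0

z0 = np.array([1., 0., 0., 0., 1., 0., 0., 0., 1.])

for h in (0.1, 0.05):
    Z, s, drift = z0.copy(), 0.0, 0.0
    while s < 2*np.pi:
        Z = Z + h*dZ(s, Z)   # forward Euler, no re-normalization
        s += h
        drift = max(drift, abs(np.linalg.norm(Z[:3]) - 1))
    print('Euler, h=%g: max deviation of ||T|| from 1: %.4f' % (h, drift))

sol = solve_ivp(dZ, (0, 2*np.pi), z0, method='RK45')
rk_drift = max(abs(np.linalg.norm(v) - 1) for v in sol.y[:3].T)
print('RK45: max deviation of ||T|| from 1: %.2e' % rk_drift)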
I found that the equations I listed in the first post do not work for my curve. So I read Gray A., Abbena E., Salamon S., Modern Differential Geometry of Curves and Surfaces with Mathematica (2006) and found that for an arbitrary curve the Frenet equations should be written as
diff(T(s), s) = ||r'|| * k(s)*N(s)
diff(N(s), s) = ||r'|| * (-k(s)*T(s) + t(s)*B(s))
diff(B(s), s) = ||r'|| * (-t(s)*N(s))
where ||r'|| (or ||r'(s)||) is the norm of diff([x(s), y(s), z(s)], s).
The problem is now somewhat different from the one in the first post, because there is no r'(s) function or discrete data array, so I think this deserves a new reply rather than a comment.
I met 2 questions while trying to solve the new equations:
How can we program with r'(s) if scipy's solve_ivp is used?
I tried to modify my solution above, but the result is totally wrong.
Thanks again.
import numpy as np
from scipy.integrate import cumtrapz
import matplotlib.pylab as plt
# Define the parameters as regular Python functions:
def k(s):
    return 1

def t(s):
    return 0

def dZ(s, Z, r_norm):
    return np.array([
        r_norm * k(s) * Z[1],
        r_norm * (-k(s) * Z[0] + t(s) * Z[2]),
        r_norm * (-t(s) * Z[1])
    ])
T0, N0, B0 = np.array([1, 0, 0]), np.array([0, 1, 0]), np.array([0, 0, 1])
deltaS = 0.1 # step to calculate dZ/ds
num = int(2*np.pi*1/deltaS) + 1 # how many points on the curve we have to calculate
T = np.zeros([num, ], dtype=object)
N = np.zeros([num, ], dtype=object)
B = np.zeros([num, ], dtype=object)
R0 = N0
T[0] = T0
N[0] = N0
B[0] = B0
for i in range(num-1):
    r_norm = np.linalg.norm(R0)
    temp_dZ = dZ(i*deltaS, np.array([T[i], N[i], B[i]]), r_norm)
    T[i+1] = T[i] + temp_dZ[0]*deltaS
    T[i+1] = T[i+1]/np.linalg.norm(T[i+1])
    N[i+1] = N[i] + temp_dZ[1]*deltaS
    N[i+1] = N[i+1]/np.linalg.norm(N[i+1])
    B[i+1] = B[i] + temp_dZ[2]*deltaS
    B[i+1] = B[i+1]/np.linalg.norm(B[i+1])
    R0 = R0 + T[i]*deltaS
coords = cumtrapz(
    [[v[0] for v in T], [v[1] for v in T], [v[2] for v in T]],
    x=np.arange(num)*deltaS
)
plt.figure()
plt.plot(coords[0, :], coords[1, :]);
plt.axis('equal'); plt.xlabel('x'); plt.ylabel('y');
plt.show()
I'm trying to solve a system of transcendental equations using sympy:
from sympy import *
A, Z, phi, d, a, tau, t, R, u, v = symbols('A Z phi d a tau t R u v', real=True)
system = [
Eq(A, sqrt(R**2*cos(u)**2 + 2*R*d*cos(a)*cos(u) + d**2 + 2*d*(v*sin(tau) - (R*sin(u) + t)*cos(tau))*sin(a) + (v*sin(tau) - (R*sin(u) + t)*cos(tau))**2)),
Eq(Z, v*cos(tau) + (R*sin(u) + t)*sin(tau)),
Eq(phi, (R*sin(a)*cos(u) - (v*sin(tau) - (R*sin(u) + t)*cos(tau))*cos(a))/A),
]
or in euclidian coordinates:
system = [
Eq(A, sqrt(d**2 + 2*d*x*cos(a) + 2*d*(z*sin(tau) - (t + y)*cos(tau))*sin(a) + x**2 + (z*sin(tau) - (t + y)*cos(tau))**2)),
Eq(Z, z*cos(tau) + (t + y)*sin(tau)),
Eq(phi, (x*sin(a) - (z*sin(tau) - (t + y)*cos(tau))*cos(a))/A),
Eq(R**2, x**2+y**2),
]
Essentially this is an intersection of a circle (radius A) with a skewed (angle τ) cylinder (radius R) rotating (angle a) around an eccentric axis (d). I'm interested in the function φ(A, Z), as I need to evaluate it for several hundred combinations of A and Z for a few combinations of the other parameters (R, a, d, t, τ).
Here is what I tried:
Directly using solve(system, [u, v, phi]): Needs too much time to complete.
Reducing the system to a single equation first and then using solveset().
Inserting numbers for all parameters (example: R, A, d, t, tao = 25, 150, 125, 2, 5/180*pi) then using any combination of the N() and solve() or solveset().
I was able to get solutions for the simplified problem of the unskewed cylinder (t, tao = 0, 0) with d = sqrt(A**2 - R**2).
How can I solve this system of equations? Failing that: How can I get numeric values for φ for a few dozen combinations of the parameters and a few hundred points (A, Z) each?
If you are interested, here is the code leading up to the equations:
A, Z, phi, d, a, tau, t, R, u, v = symbols('A Z phi d a tau t R u v', real=True)
r10, z10, phi10 = symbols('r10 z10 phi_10', real=True)
system10 = [Eq(A, r10), Eq(Z, z10), Eq(phi, phi10)]
x9, x8, y8, z8 = symbols('x9 x8 y8 z8', real=True)
system8 = [e.subs(r10, sqrt(x9**2+y8**2)).subs(z10, z8).subs(phi10, y8/A).subs(x9, x8+d) for e in system10]
r6, phi7, phi6, z6 = symbols('r6 phi_7 phi_6 z6', real=True)
system6 = [e.subs(x8, r6*cos(phi7)).subs(y8, r6*sin(phi7)).subs(z8, z6).subs(phi7, phi6+a) for e in system8]
x6, y6 = symbols('x_6 y_6', real=True)
system6xy = [simplify(expand_trig(e).subs(cos(phi6), x6/r6).subs(sin(phi6), y6/r6)) for e in system6]
# solve([simplify(e.subs(phi6, u).subs(r6, R).subs(z6, v).subs(d, sqrt(A**2-R**2))) for e in system6], [u, v, phi]) -> phi = acos(+-R/A)
r5, phi5, x5 = symbols('r_5 phi_5 x_5', real=True)
system5 = [e.subs(x6, x5).subs(y6, r5*cos(phi5)).subs(z6, r5*sin(phi5)) for e in system6xy]
r4, phi4, x4 = symbols('r_4 phi_4 x_4', real=True)
system4 = [e.subs(phi5, phi4 + tau).subs(r5, r4).subs(x5, x4) for e in system5]
x3, y3, z3 = symbols('x_3 y_3 z_3', real=True)
system3 = [expand_trig(e).subs(cos(phi4), y3/r4).subs(sin(phi4), z3/r4).subs(r4, sqrt(y3**2+z3**2)).subs(x4, x3) for e in system4]
x2, y2, z2 = symbols('x_2 y_2 z_2', real=True)
system2 = [e.subs(x3, x2).subs(y3, y2+t).subs(z3, z2) for e in system3]
system = [simplify(e.subs(x2, R*cos(u)).subs(y2, R*sin(u)).subs(z2, v)) for e in system2]
# solve([simplify(e.subs(tau, 0).subs(t, 0).subs(d, sqrt(A**2-R**2))) for e in system], [u, v, phi]) -> phi = acos(+-R/A)
Your first two equations must be solved for u and v; the 3rd then gives the corresponding value of phi. You can easily solve the 2nd for v.
Approach:
Solve the 2nd equation for v and substitute that into the 1st to obtain a function of only u and other variables, the values of which you will define (solving this high order symbolic expression will not be effective). You can't easily solve that equation for u, but you can get a reasonable solution for sin(u) in terms of cos(u) to give two roots: sin(u) = f1(cos(u)) and f2(cos(u)). I get
(Z*sin(tau) + d*sin(a)*cos(tau) - t - sqrt(A**2 - R**2*cos(u)**2 -
2*R*d*cos(a)*cos(u) - d**2*cos(a)**2)*cos(tau))/R
and
(Z*sin(tau) + d*sin(a)*cos(tau) - t + sqrt(A**2 - R**2*cos(u)**2 -
2*R*d*cos(a)*cos(u) - d**2*cos(a)**2)*cos(tau))/R
If we let x = cos(u) we know that f1(x)**2 + x**2 - 1 = 0 (which we call z1(x), and similarly z2(x) for the other root) AND we know that x is constrained to the range [-1, 1]. So we can bisect zi(x) for x in the range [-1, 1] to find the value of x; u follows from cos(u) = x and sin(u) = fi(x), and can be substituted into the second equation to find v; finally, u and v can be substituted into the 3rd to find phi.
e.g. for {tau: 3, R: 7, A: 7, t: 4, a: 3, Z: 3, d: 2}
I get sin(u) = {-.704, 0.9886}, which gives v = {-1.47, -3.16} and phi = {1.52, -.093}.
Of course, those values may be meaningless if the input is meaningless, but with the right values hopefully this approach might prove worthwhile.
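If it helps, here is a minimal numeric sketch of that recipe (scipy's brentq standing in for hand-rolled bisection; sin_u and z_i are hypothetical helper names encoding the two quoted roots for sin(u) in terms of x = cos(u), then v comes from the 2nd equation and phi from the 3rd, using the sample parameter values above):

import numpy as np
from scipy.optimize import brentq

# sample parameter values from above
tau, R, A, t, a, Z, d = 3, 7, 7, 4, 3, 3, 2

def sin_u(x, sign):
    # the two quoted roots f1/f2, with x = cos(u)
    disc = A**2 - R**2*x**2 - 2*R*d*np.cos(a)*x - d**2*np.cos(a)**2
    if disc < 0:
        return np.nan
    return (Z*np.sin(tau) + d*np.sin(a)*np.cos(tau) - t
            + sign*np.sqrt(disc)*np.cos(tau))/R

def z_i(x, sign):  # z_i(x) = f_i(x)**2 + x**2 - 1
    s = sin_u(x, sign)
    return s**2 + x**2 - 1

for sign in (-1, 1):
    xs = np.linspace(-1, 1, 401)
    vals = [z_i(x, sign) for x in xs]
    for x0, x1, v0, v1 in zip(xs, xs[1:], vals, vals[1:]):
        if np.isnan(v0) or np.isnan(v1) or v0*v1 > 0:
            continue  # no bracketed root (or outside the sqrt's domain)
        x = brentq(z_i, x0, x1, args=(sign,))
        s = sin_u(x, sign)
        v = (Z - (R*s + t)*np.sin(tau))/np.cos(tau)  # from the 2nd equation
        phi = (R*np.sin(a)*x - (v*np.sin(tau) - (R*s + t)*np.cos(tau))*np.cos(a))/A
        print('cos(u) = %.4f, sin(u) = %.4f, v = %.3f, phi = %.3f' % (x, s, v, phi))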
I tried to write an algorithm for solving the non-linear ODE
dr/dt = r(t)+r²(t)
which has (one possible) solution
r(t) = r²(t)/2+r³(t)/3
For that I implemented both the Euler method and the RK4 method in python. For error checking I used the example on rosettacode:
dT/dt = -k(T(t)-T0)
with the solution
T0 + (Ts - T0)*exp(-k*t)
Thus, my code now looks like
import numpy as np
from matplotlib import pyplot as plt
def test_func_1(t, x):
    return x*x

def test_func_1_sol(t, x):
    return x*x*x/3.0

def test_func_2_sol(TR, T0, k, t):
    return TR + (T0-TR)*np.exp(-0.07*t)

def rk4(func, dh, x0, t0):
    k1 = dh*func(t0, x0)
    k2 = dh*func(t0+dh*0.5, x0+0.5*k1)
    k3 = dh*func(t0+dh*0.5, x0+0.5*k2)
    k4 = dh*func(t0+dh, x0+k3)
    return x0+k1/6.0+k2/3.0+k3/3.0+k4/6.0

def euler(func, x0, t0, dh):
    return x0 + dh*func(t0, x0)

def rho_test(t0, rho0):
    return rho0 + rho0*rho0

def rho_sol(t0, rho0):
    return rho0*rho0*rho0/3.0+rho0*rho0/2.0

def euler2(f, y0, a, b, h):
    t, y = a, y0
    while t <= b:
        # print("%6.3f %6.5f" % (t, y))
        t += h
        y += h * f(t, y)

def newtoncooling(time, temp):
    return -0.07 * (temp - 20)
x0 = 100
x_vec_rk = []
x_vec_euler = []
x_vec_rk.append(x0)
h = 1e-3
for i in range(100000):
    x0 = rk4(newtoncooling, h, x0, i*h)
    x_vec_rk.append(x0)
x0 = 100
x_vec_euler.append(x0)
x_vec_sol = []
x_vec_sol.append(x0)
for i in range(100000):
    x0 = euler(newtoncooling, x0, 0, h)
    # print(i, x0)
    x_vec_euler.append(x0)
    x_vec_sol.append(test_func_2_sol(20, 100, 0, i*h))
euler2(newtoncooling, 0, 0, 1, 1e-4)
x_vec = np.linspace(0, 1, len(x_vec_euler))
plt.plot(x_vec, x_vec_euler, x_vec, x_vec_sol, x_vec, x_vec_rk)
plt.show()
#rho-function
x0 = 1
x_vec_rk = []
x_vec_euler = []
x_vec_rk.append(x0)
h = 1e-3
num_steps = 650
for i in range(num_steps):
    x0 = rk4(rho_test, h, x0, i*h)
    print("%6.3f %6.5f" % (i*h, x0))
    x_vec_rk.append(x0)
x0 = 1
x_vec_euler.append(x0)
x_vec_sol = []
x_vec_sol.append(x0)
for i in range(num_steps):
    x0 = euler(rho_test, x0, 0, h)
    print("%6.3f %6.5f" % (i*h, x0))
    x_vec_euler.append(x0)
    x_vec_sol.append(rho_sol(i*h, i*h+x0))
x_vec = np.linspace(0, num_steps*h, len(x_vec_euler))
plt.plot(x_vec, x_vec_euler, x_vec, x_vec_sol, x_vec, x_vec_rk)
plt.show()
It works fine for the example from rosettacode, but it is unstable and explodes (for t > 0.65, both for RK4 and Euler) for my formula. Is my implementation therefore incorrect, or is there another error I do not see?
Searching for the exact solution of your equation:
dr/dt = r(t)+r²(t)
I've found:
r(t) = exp(C+t)/(1-exp(C+t))
where C is an arbitrary constant that depends on the initial conditions. It can be seen that for t -> -C, r(t) -> infinity.
I do not know which initial condition you used, but you may be hitting this singularity when computing the numerical solution.
UPDATE:
Since your initial condition is r(0) = 1, the constant is C = ln(1/2) ≈ -0.693. That explains why your numerical solution crashes at some t > 0.65.
UPDATE:
To verify your code you can simply compare your numerical solution, calculated for, say, 0 <= t < 0.6, with the exact solution.
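For instance (a sketch, redefining the pieces it needs from the question):

import numpy as np

def rho_test(t0, rho0):
    return rho0 + rho0*rho0

def rho_exact(t):
    C = np.log(0.5)  # from r(0) = 1
    return np.exp(C + t)/(1 - np.exp(C + t))

def rk4(func, dh, x0, t0):
    k1 = dh*func(t0, x0)
    k2 = dh*func(t0 + dh*0.5, x0 + 0.5*k1)
    k3 = dh*func(t0 + dh*0.5, x0 + 0.5*k2)
    k4 = dh*func(t0 + dh, x0 + k3)
    return x0 + k1/6.0 + k2/3.0 + k3/3.0 + k4/6.0

h = 1e-3
t = np.arange(0, 0.6, h)
num = np.empty_like(t)
x0 = 1.0
for i, ti in enumerate(t):
    num[i] = x0
    x0 = rk4(rho_test, h, x0, ti)

print('max abs error on [0, 0.6):', np.abs(num - rho_exact(t)).max())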