Taylor series for pendulum in Python

I have a school assignment to simulate pendulum motion based on a Taylor series expansion. The equation of motion is
d²θ/dt² = -(m*g*R/I)*sin(θ)
I am quite new to Python. I now know how to simulate it using Euler's method:
for n in range(N_t):
    u[n+1] = u[n] + dt*v[n]                    # u = θ
    v[n+1] = v[n] - dt*(m*g*r/I)*sin(u[n])     # v = θ'; minus sign to match the equation
How can I simulate it using a Taylor expansion? Should I base the code only on the relation between the Taylor coefficients and the derivatives, e.g.
f''(x0) = 2*a2 ?

In the equation u'' = f(u) you get higher-order derivatives by repeatedly differentiating this equation, applying the chain and product rules, and substituting the ODE back in for every second derivative of u. The values of u and u' are taken from the current state vector.
u'''    = f'(u)*u'
u^{(4)} = f''(u)*u'^2 + f'(u)*u''
        = f''(u)*u'^2 + f'(u)*f(u)
u^{(5)} = f'''(u)*u'^3 + 3*f''(u)*u'*f(u) + f'(u)^2*u'
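For example, for the scaled pendulum f(u) = -sin(u) (so f'(u) = -cos(u), f''(u) = sin(u), f'''(u) = cos(u)), a fifth-order Taylor step might look like the sketch below; the function name and interface are mine, not from the question.
from math import sin, cos

def taylor_step(u, v, dt):
    """One 5th-order Taylor step for u'' = f(u) = -sin(u), with v = u'."""
    f  = -sin(u)                                    # u''
    d3 = -cos(u)*v                                  # u''' = f'(u)*u'
    d4 =  sin(u)*v**2 + (-cos(u))*f                 # u^{(4)}
    d5 =  cos(u)*v**3 + 3*sin(u)*v*f + cos(u)**2*v  # u^{(5)}
    u_new = u + dt*v + dt**2/2*f + dt**3/6*d3 + dt**4/24*d4 + dt**5/120*d5
    v_new = v + dt*f + dt**2/2*d3 + dt**3/6*d4 + dt**4/24*d5
    return u_new, v_new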
There is also a systematic way, using the Taylor series arithmetic of automatic/algorithmic differentiation.
(2022/06/29) To solve x'' = -sin(x) -- the constant factors become trivial after rescaling time -- via a Taylor series around any point, reformulate it as the system
x'=y
y'=-v
u=cos(x) ==> u' = -v*y
v=sin(x) ==> v' = u*y
the last two via the trigonometric derivatives and the chain rule. Comparing coefficients left and right results in coupled incremental formulas for the coefficients of the Taylor series of all variables, with x(t0)=x0, y(t0)=y0, u(t0)=cos(x0), v(t0)=sin(x0).
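A minimal sketch of that coefficient recursion (the function name and step interface are mine; the recurrences follow from matching the coefficient of t^k on both sides of each equation, with products expanded as Cauchy convolutions):
import numpy as np

def pendulum_taylor_step(x0, y0, dt, K=10):
    """One Taylor step of order K for x' = y, y' = -v, u' = -v*y, v' = u*y.

    Matching coefficients of t^k gives
        (k+1)*x_{k+1} = y_k
        (k+1)*y_{k+1} = -v_k
        (k+1)*u_{k+1} = -sum_{j=0..k} v_j*y_{k-j}   (Cauchy product)
        (k+1)*v_{k+1} =  sum_{j=0..k} u_j*y_{k-j}
    """
    x = np.zeros(K+1); y = np.zeros(K+1)
    u = np.zeros(K+1); v = np.zeros(K+1)
    x[0], y[0] = x0, y0
    u[0], v[0] = np.cos(x0), np.sin(x0)
    for k in range(K):
        vy = sum(v[j]*y[k-j] for j in range(k+1))
        uy = sum(u[j]*y[k-j] for j in range(k+1))
        x[k+1] = y[k]/(k+1)
        y[k+1] = -v[k]/(k+1)
        u[k+1] = -vy/(k+1)
        v[k+1] = uy/(k+1)
    powers = dt ** np.arange(K+1)
    return x @ powers, y @ powers   # x(t0+dt), y(t0+dt)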

I assume you meant this from your code:
v' = -(m*g*r/I)*sin(u)
I also assume that u := θ and v := θ'.
So, the Taylor expansion of sin(x) is
sin(x) = x - x^3/3! + x^5/5! - ...
and your equation is now
v' = -(m*g*r/I)*(u - u^3/3! + u^5/5! - ...)
So, you can calculate u and v from the above equations. Or, I don't know if your teacher wanted you to calculate the integrals first and then use the Taylor series.
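If that is the intent, a minimal sketch (reusing the variables from the question's Euler loop; the truncation order is chosen arbitrarily here):
def sin_taylor(x, terms=4):
    """Truncated Taylor series of sin around 0: x - x^3/3! + x^5/5! - ..."""
    s, term = 0.0, x
    for k in range(terms):
        s += term
        term *= -x*x / ((2*k + 2) * (2*k + 3))
    return s

for n in range(N_t):
    u[n+1] = u[n] + dt*v[n]
    v[n+1] = v[n] - dt*(m*g*r/I)*sin_taylor(u[n])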

Related

Finding the minimum distance from a point to a curve

I need to find the minimum distance from a point (X, Y) to a curve defined by four coefficients C0, C1, C2, C3, like y = C0 + C1*x + C2*x^2 + C3*x^3.
I have used a numerical approach: np.linspace and np.polyval to generate discrete (x, y) points on the curve, then Shapely's Point, MultiPoint and nearest_points to find the nearest point, and finally np.linalg.norm to find the distance.
This is a numerical approach by discretizing the curve.
My question is how can I find the distance by analytical methods and code it?
Problem definition
For the sake of simplicity let's use P for the point and Px and Py for the coordinates. Let's call the function f(x).
Another way to look at your problem is that you're trying to find the x that minimizes the distance between P and the point (x, f(x)).
The problem can then be formulated as a minimization problem.
Find x that minimizes (x-Px)² + (f(x)-Py)²
(Note that we can drop the square root that should be there, because the square root is a monotonic function and doesn't change the optima. Some details here.)
Analytical solution
The fully analytical way to solve this would be a pen-and-paper approach. You can expand the expression, compute its derivative, and find where the derivative vanishes to locate the extrema (this will be a lengthy process to do analytically; @Yves Daoust addresses it in his answer. Either do that, or use a numerical solver for this part, for example a version of Newton's method). Then check whether each extremum is a maximum or a minimum, e.g. by evaluating the function at a few nearby points to see how it behaves. From this you can find the global minimum, and that gives you the x you're looking for. But developing all of this is content better suited for a mathematics site.
Optimization approach
So instead I'm going to suggest a solution that uses numerical minimization without a sampling approach. You can use the minimize function from scipy to solve the minimization problem.
from scipy.optimize import minimize

# Define the curve
C0 = -1
C1 = 5
C2 = -5
C3 = 6
f = lambda x: C0 + C1*x + C2*x**2 + C3*x**3

# Define the function to minimize: the squared distance to the point
p_x = 12
p_y = -7
min_f = lambda x: (x - p_x)**2 + (f(x) - p_y)**2

# Minimize (the starting point can matter if there are several local minima)
min_res = minimize(min_f, 0)

# Show result
print("Closest point is x=", min_res.x[0], " y=", f(min_res.x[0]))
Here I used your function with dummy values but you could use any function you want with this approach.
You need to differentiate (x - X)² + (C0 + C1 x + C2 x² + C3 x³ - Y)² and find the roots. But this is a quintic polynomial (fifth degree) with general coefficients so the Abel-Ruffini theorem fully applies, meaning that there is no solution in radicals.
There is a known solution anyway, by reducing the equation (via a lengthy substitution process) to the form x^5 - x + t = 0 known as the Bring–Jerrard normal form, and getting the solutions (called ultraradicals) by means of the elliptic functions of Hermite or evaluation of the roots by Taylor.
Personal note:
This approach is virtually foolish, as there exist ready-made numerical polynomial root-finders, and the ultraradical function is not easy to evaluate.
Anyway, looking at the plot of x^5 - x, one can see that it is intersected once or three times by any horizontal line, and finding an interval with a change of sign is easy. With that, you can obtain an accurate root by dichotomy (and far from the extrema, Newton's method will converge easily).
After having found this root, you can deflate the polynomial to a quartic, for which explicit formulas by radicals are known.
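A sketch of that recommended numerical route, using numpy's polynomial root-finder on (half of) the derivative of the squared distance; the point and coefficients are borrowed from the example values in the other answer:
import numpy as np
from numpy.polynomial import Polynomial

X, Y = 12.0, -7.0                        # the query point
f = Polynomial([-1.0, 5.0, -5.0, 6.0])   # C0 + C1*x + C2*x^2 + C3*x^3

# half the derivative of (x-X)^2 + (f(x)-Y)^2: (x-X) + f'(x)*(f(x)-Y), a quintic
g = Polynomial([-X, 1.0]) + f.deriv() * (f - Y)

roots = g.roots()                             # numerical root-finder
real = roots[np.abs(roots.imag) < 1e-9].real  # keep the real roots
dist2 = (real - X)**2 + (f(real) - Y)**2
x_min = real[np.argmin(dist2)]
print("closest point:", x_min, f(x_min))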

Python: How can I test for a relatively straight line in a series of cubic lines?

I have a collection of curved lines, representing the third degree polynomial line of best fit for some datasets.
I want to differentiate relatively flat lines, filtering these plots, for further analyses.
For example I want to filter subplots 20935, 21004, 21010, 18761, 21037.
How can I do this, with a list of floats as input for these lines?
(using Python 3.8, NumPy, math, matplotlib in an Anaconda env)
If you have a list of xs and their respective ys, you can compute the slope at each point and check whether the slope stays (approximately) constant.
threshold = 0.001  # your precision here; zero would demand a perfectly straight line
is_straight_line = True
slope = (y[1] - y[0]) / (x[1] - x[0])
for i, (xval, yval) in enumerate(zip(x[2:], y[2:])):
    # x[2:][i] is x[i+2], so the previous point is (x[i+1], y[i+1])
    s = (yval - y[i+1]) / (xval - x[i+1])
    if abs(s - slope) > threshold:
        is_straight_line = False
        break
print(is_straight_line)
If you need the computation to be efficient, you should consider using numpy instead; a vectorized sketch follows.
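A vectorized sketch of the same idea (assuming x and y are 1-D numpy arrays; is_straight is my name, not from the answer):
import numpy as np

def is_straight(x, y, threshold=1e-3):
    slopes = np.diff(y) / np.diff(x)    # slope between consecutive points
    return np.ptp(slopes) < threshold   # peak-to-peak spread of the slopes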
Knowledge of first-year calculus is assumed. There's a geometric property called "curvature" that basically determines how much a shape bends at a certain point (really the inverse of the radius of the osculating circle at that point).
We can use this link to develop a curvature formula for a cubic function with coefficients [a, b, c, d] at a given x.
def cubic_curvature(a, b, c, d, x):
    # curvature k = |y''| / (1 + y'^2)^(3/2) for y = a*x^3 + b*x^2 + c*x + d
    k = abs(6*a*x + 2*b) / (1 + (3*a*x**2 + 2*b*x + c)**2) ** 1.5
    return k
More general algorithms can be created for any polynomial, possibly with assistance from the sympy library depending on your needs.
With this in mind, you can set some threshold for curvature that determines whether the cubic is "straight" enough given its coefficients (I believe scipy or similar should be able to give you these from a list of points) and the x-value to be evaluated at (try the median independent variable).
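A usage sketch under those assumptions (the threshold and sample data here are arbitrary; np.polyfit returns the coefficients highest power first, matching [a, b, c, d]):
import numpy as np

xs = np.linspace(0, 10, 100)                 # independent variable
ys = 0.02*xs**3 - 0.1*xs**2 + 0.5*xs + 1.0   # example data
a, b, c, d = np.polyfit(xs, ys, 3)           # fit cubic coefficients
if cubic_curvature(a, b, c, d, np.median(xs)) < 0.05:   # arbitrary threshold
    print("flat enough to keep")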

Alternating direction implicit method for finite difference solver of pde in Python

I am working on implementing the alternating direction implicit (ADI) method to solve the FitzHugh–Nagumo reaction–diffusion model. I found a Python implementation example in a blog, but I think there is an error in the method, in the stencil presented here:
Shouldn't the reaction term f be multiplied by half the time step size?
Replacing the difference quotients by the differential quotients, one gets
U_t = D/2 * U_xx + D/2 * U_yy + Δt*f
in both instances, which is not the equation
U_t = D * (U_xx + U_yy) + f
that was the originally posed task.
So the coefficients should be 1/(Δt/2) at U_t (as it was), D/Δp^2 at U_pp for p = x, y, and 1 for f.
It seems the formula is a mix-up of the version with difference quotients and the next stage, where everything gets multiplied by Δt/2.
And in that next formula one does not need new constants, since indeed α_p = σ_p for p = x, y; and then you are right that the factor on f should be Δt/2.
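For reference, a standard Peaceman–Rachford-style ADI splitting of U_t = D*(U_xx + U_yy) + f over one step Δt reads (a sketch; implementations differ in where they evaluate f):
(U* - U^n) / (Δt/2)     = D*U*_xx + D*U^n_yy     + f(U^n)
(U^{n+1} - U*) / (Δt/2) = D*U*_xx + D*U^{n+1}_yy + f(U*)
Multiplying either line by Δt/2 to isolate the implicit unknowns is exactly where the Δt/2 factor in front of f appears.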

Stochastic integration with python

I want to numerically solve integrals that contain white noise.
Mathematically, white noise can be described by a variable X(t), which is a random variable with time average Avg[X(t)] = 0 and correlation function Avg[X(t)·X(t')] = delta_distribution(t-t').
A simple example would be to calculate the integral over X(t) from t=0 to t=1. On average this is of course zero, but what I need are different realizations of this integral.
The problem is that this does not work with scipy.integrate.quad().
Are there any packages for python that deal with stochastic integrals?
This is a good starting point for numerical SDE methods: http://math.gmu.edu/~tsauer/pre/sde.pdf.
Here is a simple numpy solver for the stochastic differential equation dX_t = a(t,X_t)dt + b(t,X_t)dW_t (the Euler–Maruyama method) which I wrote for a class project last year. It is based on the forward Euler method for ordinary differential equations, and in practice it is fairly widely used for solving SDEs.
import numpy as np

def euler_maruyama(a, b, x0, t):
    N = len(t)
    x = np.zeros((N, len(x0)))
    x[0] = x0
    for i in range(N-1):
        dt = t[i+1] - t[i]
        # Brownian increment: mean 0, VARIANCE dt, i.e. standard deviation sqrt(dt)
        dWt = np.random.normal(0, np.sqrt(dt), size=len(x0))
        x[i+1] = x[i] + a(t[i], x[i])*dt + b(t[i], x[i])*dWt
    return x
Essentially, at each timestep the deterministic part is advanced using forward Euler, and the stochastic part is advanced by generating a normal random variable dWt with mean 0 and variance dt and multiplying the diffusion term b by it.
The reason we generate dWt like this comes from the definition of Brownian motion: if $W$ is a Brownian motion, then $(W_t - W_s)$ is normally distributed with mean 0 and variance $t-s$. So dWt is a discretization of the change in $W$ over a small time interval.
This is the docstring for the function above:
Parameters
----------
a : callable a(t,X_t),
t is scalar time and X_t is vector position
b : callable b(t,X_t),
where t is scalar time and X_t is vector position
x0 : ndarray
the initial position
t : ndarray
list of times at which to evaluate trajectory
Returns
-------
x : ndarray
positions of trajectory at each time in t
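As a usage sketch for the original question, one realization of the integral of white noise over [0, 1], which is just W_1 (this example is mine, not from the class project):
import numpy as np

t = np.linspace(0, 1, 1001)
x = euler_maruyama(lambda t, x: np.zeros(1),   # a: no drift
                   lambda t, x: np.ones(1),    # b: unit noise intensity
                   np.zeros(1), t)
print(x[-1, 0])   # one realization of the integral; rerun for more samples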

On ordinary differential equations (ODE) and optimization, in Python

I want to solve this kind of problem:
dy/dt = 0.01*y*(1-y), find t when y = 0.8 (0<t<3000)
I've tried the ode function in Python, but it can only calculate y when t is given.
So are there any simple ways to solve this problem in Python?
PS: This function is just a simple example. My real problem is so complex that it can't be solved analytically, so I want to know how to solve it numerically. And I think this problem is more like an optimization problem:
Objective function y(t) = 0.8, Subject to dy/dt = 0.01*y*(1-y), and 0<t<3000
PPS: My real problem is:
objective function: F(t) = 0.85,
subject to: F(t) = sqrt(x(t)^2+y(t)^2+z(t)^2),
x''(t) = (1/F(t)-1)*250*x(t),
y''(t) = (1/F(t)-1)*250*y(t),
z''(t) = (1/F(t)-1)*250*z(t)-10,
x(0) = 0, y(0) = 0, z(0) = 0.7,
x'(0) = 0.1, y'(0) = 1.5, z'(0) = 0,
0<t<5
This differential equation can be solved analytically quite easily:
dy/dt = 0.01 * y * (1-y)
rearrange to gather y and t terms on opposite sides
0.01 dt = 1/(y * (1-y)) dy
The lhs integrates trivially to 0.01 * t; the rhs is slightly more complicated. We can always write such a fraction as a sum of simpler fractions with constant numerators (partial fractions):
1/(y * (1-y)) = A/y + B/(1-y)
The values for A and B can be worked out by putting the rhs on the same denominator and comparing constant and first order y terms on both sides. In this case it is simple, A=B=1. Thus we have to integrate
1/y + 1/(1-y) dy
The first term integrates to ln(y); the second can be integrated with the change of variables u = 1-y, giving -ln(1-y). Our integrated equation therefore looks like:
0.01 * t + C = ln(y) - ln(1-y)
not forgetting the constant of integration (it is convenient to write it on the lhs here). We can combine the two logarithm terms:
0.01 * t + C = ln( y / (1-y) )
In order to solve for t at an exact value of y, we first need to work out the value of C. We do this using the initial conditions. (It is clear that if y starts at exactly 0 or 1, dy/dt = 0 and the value of y never changes, so assume it does not.) Plug in the values of y and t at the beginning:
0.01 * 0 + C = ln( y(0) / (1 - y(0)) )
This gives a value for C (assuming y(0) is not 0 or 1), and then using y = 0.8 gives a value for t. Note that because of the logarithm, y will reach 0.8 within a moderate range of t values unless the initial value of y is incredibly small. It is of course also straightforward to rearrange the equation above to express y in terms of t, so you can plot the function as well.
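As a quick check (assuming, say, y(0) = 0.5, which the question doesn't specify):
from math import log

y0 = 0.5                               # assumed initial condition
C = log(y0 / (1 - y0))                 # from 0.01*0 + C = ln(y0/(1-y0))
t = (log(0.8 / (1 - 0.8)) - C) / 0.01
print(t)                               # ~138.6 for y0 = 0.5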
Edit: Numerical integration
For a more complex ODE which cannot be solved analytically, you will have to integrate numerically. Initially we only know the value of the function at time zero, y(0) (we have to know at least that much to uniquely define the trajectory), and how to evaluate the gradient. The idea of numerical integration is to use our knowledge of the gradient (which tells us how the function is changing) to work out the value of the function in the vicinity of our starting point. The simplest way to do this is Euler integration:
y(dt) = y(0) + dy/dt * dt
Euler integration assumes that the gradient is constant between t=0 and t=dt. Once y(dt) is known, the gradient can be calculated there also and in turn used to calculate y(2 * dt) and so on, gradually building up the complete trajectory of the function. If you are looking for a particular target value, just wait until the trajectory goes past that value, then interpolate between the last two positions to get the precise t.
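A minimal sketch of this procedure for the example ODE (the initial condition is assumed, since the question doesn't give one):
# Euler integration of dy/dt = 0.01*y*(1-y), stopping when y passes 0.8
y, t, dt = 0.5, 0.0, 0.01          # assumed y(0) = 0.5
while y < 0.8 and t < 3000:
    y_prev, t_prev = y, t
    y += 0.01 * y * (1 - y) * dt   # Euler step: y(t+dt) ≈ y(t) + dy/dt * dt
    t += dt
# interpolate between the last two points for a more precise crossing time
t_hit = t_prev + dt * (0.8 - y_prev) / (y - y_prev)
print(t_hit)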
The problem with Euler integration (and with all other numerical integration methods) is that its results are only accurate when its assumptions are valid. Because the gradient is not actually constant between pairs of time points, a certain amount of error arises at each integration step, and over time this builds up until the answer is completely inaccurate. To improve the quality of the integration, it is necessary to use more sophisticated approximations of the gradient. Check out, for example, the Runge-Kutta methods, a family of integrators that remove successively higher-order error terms at the cost of increased computation time. If your function is differentiable, knowing the second or even third derivatives can also be used to reduce the integration error.
Fortunately of course, somebody else has done the hard work here, and you don't have to worry too much about solving problems like numerical stability or have an in depth understanding of all the details (although understanding roughly what is going on helps a lot). Check out http://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.ode.html#scipy.integrate.ode for an example of an integrator class which you should be able to use straightaway. For instance
from scipy.integrate import ode

def deriv(t, y):
    return 0.01 * y * (1 - y)

my_integrator = ode(deriv)
my_integrator.set_initial_value(0.5)   # y(0) = 0.5; t0 defaults to 0

t = 0.1  # start with a small value of time
while t < 3000:
    y = my_integrator.integrate(t)
    if y[0] > 0.8:
        print("y(%f) = %f" % (t, y[0]))
        break
    t += 0.1
This code will print out the first t value when y passes 0.8 (or nothing if it never reaches 0.8). If you want a more accurate value of t, keep the y of the previous t as well and interpolate between them.
As an addition to Krastanov's answer:
Aside from PyDSTool, there are other packages, like Pysundials and Assimulo, which provide bindings to the solver IDA from Sundials. This solver has root-finding capabilities.
Use scipy.integrate.odeint to handle your integration, and analyse the results afterward.
import numpy as np
from scipy.integrate import odeint

ts = np.arange(0, 3000, 1)  # time series - start, stop, step

def rhs(y, t):
    return 0.01 * y * (1 - y)

y0 = np.array([0.5])  # initial value (0 and 1 are fixed points, so avoid them)
ys = odeint(rhs, y0, ts)
Then analyse the numpy array ys to find your answer (the first dimension of ys matches ts). (This may not work first time because I am constructing it from memory.)
This might involve using the scipy interpolate function for the ys array, such that you get a result at time t.
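For example, since the trajectory here is monotonically increasing, np.interp can invert it directly (building on the arrays above):
# find the time at which y crosses 0.8 (ys[:, 0] is increasing here)
t_hit = np.interp(0.8, ys[:, 0], ts)
print(t_hit)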
EDIT: I see that you wish to solve a spring in 3D. This should be fine with the above method; the odeint documentation on the scipy website has examples for systems such as coupled springs, and these could be extended.
What you are asking for is a ODE integrator with root finding capabilities. They exist and the low-level code for such integrators is supplied with scipy, but they have not yet been wrapped in python bindings.
For more information see this mailing list post that provides a few alternatives: http://mail.scipy.org/pipermail/scipy-user/2010-March/024890.html
You can use the following example implementation which uses backtracking (hence it is not optimal as it is a bolt-on addition to an integrator that does not have root finding on its own): https://github.com/scipy/scipy/pull/4904/files
