I'm writing a program in Python that approximates a time series with sine waves.
The program uses the DFT to find the sine waves, and then keeps the ones with the largest amplitudes.
Here's my code:
__author__ = 'FATVVS'
import math
# Wave - (amplitude, frequency, phase)
# This class was created to sort sin waves:
# - by amplitude (set freq_sort=False)
# - by frequency (set freq_sort=True)
class Wave:
    # flag for choosing sort mode:
    #   False - sort by amplitude
    #   True  - sort by frequency
    freq_sort = False

    def __init__(self, amp, freq, phase):
        self.freq = freq    # frequency
        self.amp = amp      # amplitude
        self.phase = phase

    def __lt__(self, other):
        if self.freq_sort:
            return self.freq < other.freq
        else:
            return self.amp < other.amp

    def __gt__(self, other):
        if self.freq_sort:
            return self.freq > other.freq
        else:
            return self.amp > other.amp

    def __le__(self, other):
        if self.freq_sort:
            return self.freq <= other.freq
        else:
            return self.amp <= other.amp

    def __ge__(self, other):
        if self.freq_sort:
            return self.freq >= other.freq
        else:
            return self.amp >= other.amp

    def __str__(self):
        s = "(amp=" + str(self.amp) + ",frq=" + str(self.freq) + ",phase=" + str(self.phase) + ")"
        return s

    def __repr__(self):
        return self.__str__()
# Discrete Fourier Transform
def dft(series: list):
    n = len(series)
    m = int(n / 2)
    real = [0 for _ in range(n)]
    imag = [0 for _ in range(n)]
    amplitude = []
    phase = []
    angle_const = 2 * math.pi / n
    for w in range(m):
        a = w * angle_const
        for t in range(n):
            real[w] += series[t] * math.cos(a * t)
            imag[w] += series[t] * math.sin(a * t)
        amplitude.append(math.sqrt(real[w] * real[w] + imag[w] * imag[w]) / n)
        phase.append(math.atan(imag[w] / real[w]))
    return amplitude, phase
# extract waves from time series
#   series - time series
#   num    - number of waves
def get_waves(series: list, num):
    amp, phase = dft(series)
    m = len(amp)
    waves = []
    for i in range(m):
        waves.append(Wave(amp[i], 2 * math.pi * i / m, phase[i]))
    waves.sort()
    waves.reverse()
    waves = waves[0:num]  # extract best waves
    print("the program found the next %s sin waves:" % (num))
    print(waves)  # print best waves
    return waves
# approximation by sin waves
#   series - time series
#   num    - number of sin waves
def sin_waves_appr(series: list, num):
    n = len(series)
    freq = get_waves(series, num)
    m = len(freq)
    model = []
    for i in range(n):
        summ = 0
        for j in range(m):  # sum over sin waves
            summ += freq[j].amp * math.sin(freq[j].freq * i + freq[j].phase)
        model.append(summ)
    return model
if __name__ == '__main__':
    import matplotlib.pyplot as plt

    N = 500    # length of time series
    num = 2    # number of sin waves that we want to find
    # y - generate time series
    y = [2 * math.sin(0.05 * t + 0.5) + 0.5 * math.sin(0.2 * t + 1.5) for t in range(N)]
    model = sin_waves_appr(y, num)  # generate approximation model

    ## ------------------plotting-----------------
    plt.figure(1)
    # plot the time series and its approximation model
    plt.subplot(211)
    h_signal, = plt.plot(y, label='source timeseries')
    h_model, = plt.plot(model, label='model', linestyle='--')
    plt.legend(handles=[h_signal, h_model])
    plt.grid()
    # plot the spectrum
    amp, _ = dft(y)
    xaxis = [2 * math.pi * i / N for i in range(len(amp))]
    plt.subplot(212)
    h_freq, = plt.plot(xaxis, amp, label='spectrum')
    plt.legend(handles=[h_freq])
    plt.grid()
    plt.show()
But I've got a strange result.
In the program I've created a time series from two sin waves:
    y = [2 * math.sin(0.05 * t + 0.5) + 0.5 * math.sin(0.2 * t + 1.5) for t in range(N)]
And my program found the wrong parameters for the sin waves:
    the program found the next 2 sin waves:
    [(amp=0.9998029885151699,frq=0.10053096491487339,phase=1.1411803525843616), (amp=0.24800925225626422,frq=0.40212385965949354,phase=0.346757128184013)]
I suspect that my problem is wrong scaling of the wave parameters, but I'm not sure.
There are two places where the program does scaling. The first is the creation of the waves:
    for i in range(m):
        waves.append(Wave(amp[i], 2 * math.pi * i / m, phase[i]))
And the second is the scaling of the x-axis:
    xaxis = [2 * math.pi * i / N for i in range(len(amp))]
But my guess may be wrong. I've tried changing the scaling many times, and it hasn't solved the problem.
What may be wrong with the code?
So, these lines I believe are wrong:
    for t in range(n):
        real[w] += series[t] * math.cos(a * t)
        imag[w] += series[t] * math.sin(a * t)
    amplitude.append(math.sqrt(real[w] * real[w] + imag[w] * imag[w]) / n)
    phase.append(math.atan(imag[w] / real[w]))
I believe it should divide by m instead of n, since you are only computing half the points; that fixes the amplitude problem. Also, the computation of imag[w] is missing a negative sign, and the phase should use atan2 rather than atan, so the correct quadrant is recovered and division by zero is avoided when real[w] is 0. With those fixes, it would look like:
    for t in range(n):
        real[w] += series[t] * math.cos(a * t)
        imag[w] += -1 * series[t] * math.sin(a * t)
    amplitude.append(math.sqrt(real[w] * real[w] + imag[w] * imag[w]) / m)
    phase.append(math.atan2(imag[w], real[w]))
The next one is here:
    for i in range(m):
        waves.append(Wave(amp[i], 2 * math.pi * i / m, phase[i]))
Dividing by m is not right here: amp has only half as many points as the full spectrum, so using the length of amp as the denominator gives frequencies that are twice too large. It should be:
    for i in range(m):
        waves.append(Wave(amp[i], 2 * math.pi * i / (m * 2), phase[i]))
Finally, your model reconstruction has a problem:
    for j in range(m):  # sum by sin waves
        summ += freq[j].amp * math.sin(freq[j].freq * i + freq[j].phase)
It should use cosine instead, since the atan2(imag, real) phase convention matches a cosine; using sine introduces an extra phase shift of pi/2:
    for j in range(m):  # sum by cos waves
        summ += freq[j].amp * math.cos(freq[j].freq * i + freq[j].phase)
When I fix all of that, both the model and the DFT make sense.
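Putting all of that together, here is a sketch of what the corrected dft and reconstruction look like (reusing the Wave class from the question; this mirrors the structure of the original code rather than being a polished FFT implementation):

    import math

    def dft(series: list):
        # Single-sided DFT over the first n/2 bins.
        n = len(series)
        m = n // 2
        amplitude, phase = [], []
        for w in range(m):
            a = 2 * math.pi * w / n
            re = sum(series[t] * math.cos(a * t) for t in range(n))
            im = -sum(series[t] * math.sin(a * t) for t in range(n))
            amplitude.append(math.sqrt(re * re + im * im) / m)  # divide by m, not n
            phase.append(math.atan2(im, re))                    # atan2, not atan
        return amplitude, phase

    def sin_waves_appr(series: list, num):
        n = len(series)
        amp, phase = dft(series)
        m = len(amp)
        # Bin i corresponds to angular frequency 2*pi*i/n = 2*pi*i/(m*2).
        waves = sorted((Wave(amp[i], 2 * math.pi * i / (m * 2), phase[i])
                        for i in range(m)), reverse=True)[:num]
        # Reconstruct with cosine, matching the atan2(im, re) phase convention.
        return [sum(w.amp * math.cos(w.freq * t + w.phase) for w in waves)
                for t in range(n)]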
Can anyone tell me what is wrong with this code? It is from https://jakevdp.github.io/blog/2012/09/05/quantum-python/.
Everything in it worked except the title of the plot; I can't figure it out.
It should look like this (first image in the original post), but when the code is run, it plots this (second image).
Here is the code given:
"""
General Numerical Solver for the 1D Time-Dependent Schrodinger's equation.
author: Jake Vanderplas
email: vanderplas#astro.washington.edu
website: http://jakevdp.github.com
license: BSD
Please feel free to use and modify this, but keep the above information. Thanks!
"""
import numpy as np
from matplotlib import pyplot as pl
from matplotlib import animation
from scipy.fftpack import fft,ifft
class Schrodinger(object):
    """
    Class which implements a numerical solution of the time-dependent
    Schrodinger equation for an arbitrary potential
    """
    def __init__(self, x, psi_x0, V_x,
                 k0=None, hbar=1, m=1, t0=0.0):
        """
        Parameters
        ----------
        x : array_like, float
            length-N array of evenly spaced spatial coordinates
        psi_x0 : array_like, complex
            length-N array of the initial wave function at time t0
        V_x : array_like, float
            length-N array giving the potential at each x
        k0 : float
            the minimum value of k.  Note that, because of the workings of the
            fast fourier transform, the momentum wave-number will be defined
            in the range
                k0 < k < 2*pi / dx
            where dx = x[1]-x[0].  If you expect nonzero momentum outside this
            range, you must modify the inputs accordingly.  If not specified,
            k0 will be calculated such that the range is [-k0,k0]
        hbar : float
            value of planck's constant (default = 1)
        m : float
            particle mass (default = 1)
        t0 : float
            initial time (default = 0)
        """
        # Validation of array inputs
        self.x, psi_x0, self.V_x = map(np.asarray, (x, psi_x0, V_x))
        N = self.x.size
        assert self.x.shape == (N,)
        assert psi_x0.shape == (N,)
        assert self.V_x.shape == (N,)

        # Set internal parameters
        self.hbar = hbar
        self.m = m
        self.t = t0
        self.dt_ = None
        self.N = len(x)
        self.dx = self.x[1] - self.x[0]
        self.dk = 2 * np.pi / (self.N * self.dx)

        # set momentum scale
        if k0 == None:
            self.k0 = -0.5 * self.N * self.dk
        else:
            self.k0 = k0
        self.k = self.k0 + self.dk * np.arange(self.N)

        self.psi_x = psi_x0
        self.compute_k_from_x()

        # variables which hold steps in evolution of the
        self.x_evolve_half = None
        self.x_evolve = None
        self.k_evolve = None

        # attributes used for dynamic plotting
        self.psi_x_line = None
        self.psi_k_line = None
        self.V_x_line = None

    def _set_psi_x(self, psi_x):
        self.psi_mod_x = (psi_x * np.exp(-1j * self.k[0] * self.x)
                          * self.dx / np.sqrt(2 * np.pi))

    def _get_psi_x(self):
        return (self.psi_mod_x * np.exp(1j * self.k[0] * self.x)
                * np.sqrt(2 * np.pi) / self.dx)

    def _set_psi_k(self, psi_k):
        self.psi_mod_k = psi_k * np.exp(1j * self.x[0]
                                        * self.dk * np.arange(self.N))

    def _get_psi_k(self):
        return self.psi_mod_k * np.exp(-1j * self.x[0] *
                                       self.dk * np.arange(self.N))

    def _get_dt(self):
        return self.dt_

    def _set_dt(self, dt):
        if dt != self.dt_:
            self.dt_ = dt
            self.x_evolve_half = np.exp(-0.5 * 1j * self.V_x
                                        / self.hbar * dt)
            self.x_evolve = self.x_evolve_half * self.x_evolve_half
            self.k_evolve = np.exp(-0.5 * 1j * self.hbar /
                                   self.m * (self.k * self.k) * dt)

    psi_x = property(_get_psi_x, _set_psi_x)
    psi_k = property(_get_psi_k, _set_psi_k)
    dt = property(_get_dt, _set_dt)
    def compute_k_from_x(self):
        self.psi_mod_k = fft(self.psi_mod_x)

    def compute_x_from_k(self):
        self.psi_mod_x = ifft(self.psi_mod_k)

    def time_step(self, dt, Nsteps=1):
        """
        Perform a series of time-steps via the time-dependent
        Schrodinger Equation.

        Parameters
        ----------
        dt : float
            the small time interval over which to integrate
        Nsteps : float, optional
            the number of intervals to compute.  The total change
            in time at the end of this method will be dt * Nsteps.
            default is N = 1
        """
        self.dt = dt

        if Nsteps > 0:
            self.psi_mod_x *= self.x_evolve_half

        for i in xrange(Nsteps - 1):
            self.compute_k_from_x()
            self.psi_mod_k *= self.k_evolve
            self.compute_x_from_k()
            self.psi_mod_x *= self.x_evolve

        self.compute_k_from_x()
        self.psi_mod_k *= self.k_evolve
        self.compute_x_from_k()
        self.psi_mod_x *= self.x_evolve_half

        self.compute_k_from_x()
        self.t += dt * Nsteps
######################################################################
# Helper functions for gaussian wave-packets

def gauss_x(x, a, x0, k0):
    """
    a gaussian wave packet of width a, centered at x0, with momentum k0
    """
    return ((a * np.sqrt(np.pi)) ** (-0.5)
            * np.exp(-0.5 * ((x - x0) * 1. / a) ** 2 + 1j * x * k0))

def gauss_k(k, a, x0, k0):
    """
    analytical fourier transform of gauss_x(x), above
    """
    return ((a / np.sqrt(np.pi)) ** 0.5
            * np.exp(-0.5 * (a * (k - k0)) ** 2 - 1j * (k - k0) * x0))

######################################################################
# Utility functions for running the animation

def theta(x):
    """
    theta function :
    returns 0 if x<=0, and 1 if x>0
    """
    x = np.asarray(x)
    y = np.zeros(x.shape)
    y[x > 0] = 1.0
    return y

def square_barrier(x, width, height):
    return height * (theta(x) - theta(x - width))
######################################################################
# Create the animation

# specify time steps and duration
dt = 0.01
N_steps = 50
t_max = 120
frames = int(t_max / float(N_steps * dt))

# specify constants
hbar = 1.0   # planck's constant
m = 1.9      # particle mass

# specify range in x coordinate
N = 2 ** 11
dx = 0.1
x = dx * (np.arange(N) - 0.5 * N)

# specify potential
V0 = 1.5
L = hbar / np.sqrt(2 * m * V0)
a = 3 * L
x0 = -60 * L
V_x = square_barrier(x, a, V0)
V_x[x < -98] = 1E6
V_x[x > 98] = 1E6

# specify initial momentum and quantities derived from it
p0 = np.sqrt(2 * m * 0.2 * V0)
dp2 = p0 * p0 * 1. / 80
d = hbar / np.sqrt(2 * dp2)

k0 = p0 / hbar
v0 = p0 / m
psi_x0 = gauss_x(x, d, x0, k0)

# define the Schrodinger object which performs the calculations
S = Schrodinger(x=x,
                psi_x0=psi_x0,
                V_x=V_x,
                hbar=hbar,
                m=m,
                k0=-28)
######################################################################
# Set up plot
fig = pl.figure()

# plotting limits
xlim = (-100, 100)
klim = (-5, 5)

# top axes show the x-space data
ymin = 0
ymax = V0
ax1 = fig.add_subplot(211, xlim=xlim,
                      ylim=(ymin - 0.2 * (ymax - ymin),
                            ymax + 0.2 * (ymax - ymin)))
psi_x_line, = ax1.plot([], [], c='r', label=r'$|\psi(x)|$')
V_x_line, = ax1.plot([], [], c='k', label=r'$V(x)$')
center_line = ax1.axvline(0, c='k', ls=':',
                          label=r"$x_0 + v_0t$")
title = ax1.set_title("")
ax1.legend(prop=dict(size=12))
ax1.set_xlabel('$x$')
ax1.set_ylabel(r'$|\psi(x)|$')

# bottom axes show the k-space data
ymin = abs(S.psi_k).min()
ymax = abs(S.psi_k).max()
ax2 = fig.add_subplot(212, xlim=klim,
                      ylim=(ymin - 0.2 * (ymax - ymin),
                            ymax + 0.2 * (ymax - ymin)))
psi_k_line, = ax2.plot([], [], c='r', label=r'$|\psi(k)|$')

p0_line1 = ax2.axvline(-p0 / hbar, c='k', ls=':', label=r'$\pm p_0$')
p0_line2 = ax2.axvline(p0 / hbar, c='k', ls=':')
mV_line = ax2.axvline(np.sqrt(2 * V0) / hbar, c='k', ls='--',
                      label=r'$\sqrt{2mV_0}$')
ax2.legend(prop=dict(size=12))
ax2.set_xlabel('$k$')
ax2.set_ylabel(r'$|\psi(k)|$')

V_x_line.set_data(S.x, S.V_x)
######################################################################
# Animate plot
def init():
    psi_x_line.set_data([], [])
    V_x_line.set_data([], [])
    center_line.set_data([], [])
    psi_k_line.set_data([], [])
    title.set_text("")
    return (psi_x_line, V_x_line, center_line, psi_k_line, title)

def animate(i):
    S.time_step(dt, N_steps)
    psi_x_line.set_data(S.x, 4 * abs(S.psi_x))
    V_x_line.set_data(S.x, S.V_x)
    center_line.set_data(2 * [x0 + S.t * p0 / m], [0, 1])
    psi_k_line.set_data(S.k, abs(S.psi_k))
    title.set_text("t = %.2f" % S.t)
    return (psi_x_line, V_x_line, center_line, psi_k_line, title)

# call the animator.  blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
                               frames=frames, interval=30, blit=True)

# uncomment the following line to save the video in mp4 format.  This
# requires either mencoder or ffmpeg to be installed on your system
#anim.save('schrodinger_barrier.mp4', fps=15, extra_args=['-vcodec', 'libx264'])

pl.show()
For your convenience, the lines of this code I suspect for the error are 238, 247, 255, 286, 287, 296, and 297 (numbering from the original post).
Thanks in advance.
The problem is resolved when blit=False, though it may slow down your animation.
Just quoting from a previous answer:
"Possible solutions are:
Put the title inside the axes.
Don't use blitting"
See: How to update plot title with matplotlib using animation?
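For the first option, a minimal sketch using the variable names from the script above: replace the set_title call with a text artist placed inside the axes, since artists inside the axes region are redrawn when blit=True:

    # Instead of: title = ax1.set_title("")
    title = ax1.text(0.5, 0.95, "", transform=ax1.transAxes,
                     ha="center", va="top")

The existing title.set_text(...) calls in init and animate then work unchanged.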
You also need ffmpeg installed. There are other answers on stackoverflow that help you through that installation. But for this script, here are my recommended new lines you need to add, assuming you're using Windows:
pl.rcParams['animation.ffmpeg_path'] = r"PUT_YOUR_FULL_PATH_TO_FFMPEG_HERE\ffmpeg.exe"
Writer = animation.writers['ffmpeg']
Then adjust the anim.save() line to:
anim.save('schrodinger_barrier.mp4', writer=Writer(fps=15, codec='libx264'))
I am trying to implement ifft2 by computing the individual dot products instead of using matrix multiplication (I understand it is computationally extremely intensive). The image constructed from the individual dot-product implementation is upside-down compared to the matrix-multiplication ifft2. In the code below rA is the data.
Code with matrix multiplication:
def DFT_matrix(N):
    i, j = np.meshgrid(np.arange(N), np.arange(N))
    omega = np.exp(-2 * np.pi * 1j / N)
    W = np.power(omega, i * j) / N
    return W

def forPyhton(v1, v2):
    weight = np.dot(v2, v1)
    return weight

rA = slice_kspace[5, :, :]
slice7 = np.fft.ifft2(rA)
slice7 = np.fft.fftshift(slice7)
slices7Abs = np.abs(slice7) + 1e-9
dftMtxM = np.conj(DFT_matrix(len(rA)))
dftMtxN = np.conj(DFT_matrix(len(rA[1])))
mA = dftMtxM @ rA @ dftMtxN
Individual dot-product-wise implementation:
def DFT_matrix(N):
    i, j = np.meshgrid(np.arange(N), np.arange(N))
    omega = np.exp(-2 * np.pi * 1j / N)
    W = np.power(omega, i * j) / N
    return W

def forPyhton(v1, v2):
    weight = np.dot(v2, v1)
    return weight

rA = slice_kspace[5, :, :]
slice7 = np.fft.ifft2(rA)
slice7 = np.fft.fftshift(slice7)
slices7Abs = np.abs(slice7) + 1e-9
dftMtxM = np.conj(DFT_matrix(len(rA)))
dftMtxN = np.conj(DFT_matrix(len(rA[1])))
#mA = dftMtxM @ rA @ dftMtxN
result = []
for i in range(0, rA.shape[0]):
    row1 = []
    for j in range(0, dftMtxN.shape[1]):
        scaleWeight = forPyhton(rA[i, :], dftMtxN[:, j])
        row1.append(scaleWeight)
    result.append(row1)
result = np.asarray(result)
mA = []
for i in range(0, dftMtxM.shape[0]):
    row2 = []
    for j in range(0, result.shape[1]):
        scaleWeight = forPyhton(dftMtxM[i, :], np.array(result[:, j]))
        row2.append(scaleWeight)
    mA.append(row2)
mm = np.amax(np.abs(mA))
mA = np.fft.fftshift(mA)
mAabs = np.abs(mA) + 1e-9
The plotting of slices7Abs and mAabs is done by:
plt.figure(3)
plt.imshow(slices7Abs, cmap='gray', origin='lower')
plt.figure(4)
plt.imshow(mAabs, cmap='gray', origin='lower')
plt.show()
Figures 3 and 4 are the same in the first case, i.e. the matrix-wise multiplication, but in the second case, with the individual dot-product implementation, figure 4 is upside-down. Any idea why the image is upside-down?
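For reference, here is a self-contained sanity check I put together (my own test harness, not part of the original script) that compares the dot-product construction against plain matrix multiplication on random data:

    import numpy as np

    def DFT_matrix(N):
        i, j = np.meshgrid(np.arange(N), np.arange(N))
        omega = np.exp(-2 * np.pi * 1j / N)
        return np.power(omega, i * j) / N

    rng = np.random.default_rng(0)
    rA = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
    M = np.conj(DFT_matrix(rA.shape[0]))
    Nmat = np.conj(DFT_matrix(rA.shape[1]))

    direct = M @ rA @ Nmat
    # Element (i, j) built from individual dot products, as in the loop version.
    loops = np.array([[np.dot(M[i, :], rA @ Nmat[:, j])
                       for j in range(Nmat.shape[1])]
                      for i in range(M.shape[0])])
    print(np.allclose(direct, loops))  # True: the two constructions agree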
I am trying to calculate g(x_{i+2}) from the values g(x_{i+1}) and g(x_i), where i is an integer, assuming I(x) and s(x) are Gaussian functions. If we know x_i = 100, then the summation runs from 0 to 100. I don't know how to handle g(x_i) with the subscript in Python. Knowing the first and second values, we can find the third; after n cycles, we can find the nth value.
Equation: (posted as an image in the original question)
Code:
import numpy as np
from matplotlib import pyplot as p
from math import pi

def f_s(x, mu_s, sig_s):
    ss = -np.power(x - mu_s, 2) / (2 * np.power(sig_s, 2))
    return np.exp(ss) / (np.power(2 * pi, 2) * sig_s)

def f_i(x, mu_i, sig_i):
    ii = -np.power(x - mu_i, 2) / (2 * np.power(sig_i, 2))
    return np.exp(ii) / (np.power(2 * pi, 2) * sig_i)

# problems occur in this part
def g(x, m, mu_s, sig_s, mu_i, sig_i):
    for i in range(1, m):  # specify the number x, x_1, x_2, x_3 ... x_m
        h = (x[i + 1] - x[i]) / e
        for n in range(0, x[i]):  # calculate summation
            sum_f = (f_i(x[i], mu_i, sig_i) - f_s(x[i] - n, mu_s, sig_s) * g_x[n]) * np.conj(f_s(n + x[i], mu_s, sig_s))
        g_x[1] = 1  # initial value
        g_x[2] = 5
        g_x[i + 2] = h * sum_f + 2 * g_x[i + 1] - g_x[i]
    return g_x[i + 2]

x = np.linspace(-10, 10, 10000)
e = 1
d = 0.01
m = 1000
mu_s = 2
sig_s = 1
mu_i = 1
sig_i = 1

p.plot(x, g(x, m, mu_s, sig_s, mu_i, sig_i))
p.legend()
p.show()
Result: (plots of I(x) and s(x), posted as images in the original question)
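To make the indexing question concrete, here is a minimal sketch (with a placeholder right-hand side instead of my real sum) of the pattern I'm after: preallocate an array for g and fill it with the three-term recurrence:

    import numpy as np

    m = 1000
    g_x = np.zeros(m + 2)        # storage for g(x_0) ... g(x_{m+1})
    g_x[0], g_x[1] = 1.0, 5.0    # the two known starting values

    for i in range(m):
        rhs = 0.0                # placeholder for h * sum_f in the real code
        g_x[i + 2] = rhs + 2 * g_x[i + 1] - g_x[i]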
I will try and explain exactly what's going on and what my issue is.
This is a bit mathy and SO doesn't support LaTeX, so sadly I had to resort to images (the equations were posted as images). I hope that's okay. I don't know why the image came out inverted, sorry about that.
At any rate, this is a linear system Ax = b where we know A and b, so we can find x, which is our approximation at the next time step. We continue doing this until time t_final.
This is the code:
import numpy as np

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

for l in range(2, 12):
    N = 2 ** l      # number of grid points
    dx = 1.0 / N    # space between grid points
    dx2 = dx * dx
    dt = dx         # time step
    t_final = 1
    approximate_f = np.zeros((N, 1), dtype=np.complex)
    approximate_g = np.zeros((N, 1), dtype=np.complex)

    # Insert initial conditions
    for k in range(N):
        approximate_f[k, 0] = np.cos(tau * k * dx)
        approximate_g[k, 0] = -i * np.sin(tau * k * dx)

    # Create coefficient matrix
    A = np.zeros((2 * N, 2 * N), dtype=np.complex)

    # First row is special
    A[0, 0] = 1 - 3 * i * dt
    A[0, N] = ((2 * dt / dx2) + dt) * i
    A[0, N + 1] = (-dt / dx2) * i
    A[0, -1] = (-dt / dx2) * i

    # Last row is special
    A[N - 1, N - 1] = 1 - (3 * dt) * i
    A[N - 1, N] = (-dt / dx2) * i
    A[N - 1, -2] = (-dt / dx2) * i
    A[N - 1, -1] = ((2 * dt / dx2) + dt) * i

    # middle
    for k in range(1, N - 1):
        A[k, k] = 1 - (3 * dt) * i
        A[k, k + N - 1] = (-dt / dx2) * i
        A[k, k + N] = ((2 * dt / dx2) + dt) * i
        A[k, k + N + 1] = (-dt / dx2) * i

    # Bottom half
    A[N:, :N] = A[:N, N:]
    A[N:, N:] = A[:N, :N]

    Ainv = np.linalg.inv(A)

    # Advance through time
    time = 0
    while time < t_final:
        b = np.concatenate((approximate_f, approximate_g), axis=0)
        x = np.dot(Ainv, b)  # Solve Ax = b
        approximate_f = x[:N]
        approximate_g = x[N:]
        time += dt
    approximate_solution = np.concatenate((approximate_f, approximate_g), axis=0)

    # Calculate the actual solution
    actual_f = np.zeros((N, 1), dtype=np.complex)
    actual_g = np.zeros((N, 1), dtype=np.complex)
    for k in range(N):
        actual_f[k, 0] = solution_f(t_final, k * dx)
        actual_g[k, 0] = solution_g(t_final, k * dx)
    actual_solution = np.concatenate((actual_f, actual_g), axis=0)

    print(np.sqrt(dx) * np.linalg.norm(actual_solution - approximate_solution))
It doesn't work, at least not in the beginning; it shouldn't converge this slowly. The scheme should be unconditionally stable and converge to the right answer.
What's going wrong here?
The L2-norm can be a useful metric to test convergence, but it isn't ideal when debugging, as it doesn't explain what the problem is. Although your solution should be unconditionally stable, backward Euler won't necessarily converge to the right answer. Just as forward Euler is notoriously unstable (anti-dissipative), backward Euler is notoriously dissipative. Plotting your solutions confirms this: the numerical solutions converge to zero. For a higher-order approximation, Crank-Nicolson is a reasonable candidate. The code below contains the more general theta-method, which solves (I + theta*H) u_{n+1} = (I - (1-theta)*H) u_n, so that you can tune the implicitness of the solution: theta=0.5 gives CN, theta=1 gives BE, and theta=0 gives FE.
A couple of other things that I tweaked:
- I selected a more appropriate time step of dt = (dx**2)/2 instead of dt = dx. The latter doesn't converge to the right solution using CN.
- It's a minor note, but since t_final isn't guaranteed to be a multiple of dt, you weren't comparing solutions at the same time step.
With regards to your comment about it being slow: as you increase the spatial resolution, your time resolution needs to increase too. Even in your case with dt = dx, you have to perform a (1024 x 1024)*1024 matrix multiplication 1024 times. I didn't find this to take particularly long on my machine. I removed some unneeded concatenation to speed it up a bit, but changing the time step to dt = (dx**2)/2 will really bog things down, unfortunately. You could try compiling with Numba if you are concerned with speed (see the sketch after the code below).
All that said, I didn't find tremendous success with the consistency of CN. I had to set N = 2^6 to get anything reasonable at t_final = 1. Increasing t_final makes this worse; decreasing t_final makes it better. Depending on your needs, you could look into implementing TR-BDF2 or other linear multistep methods to improve this.
The code with a plot is below:
import numpy as np
import matplotlib.pyplot as plt

tau = 2 * np.pi
tau2 = tau * tau
i = complex(0, 1)

def solution_f(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) + np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

def solution_g(t, x):
    return 0.5 * (np.exp(-tau * i * x) * np.exp((2 - tau2) * i * t) - np.exp(tau * i * x) * np.exp((tau2 + 4) * i * t))

l = 6
N = 2 ** l
dx = 1.0 / N
dx2 = dx * dx
dt = dx2 / 2
t_final = 1.
x_arr = np.arange(0, 1, dx)

approximate_f = np.cos(tau * x_arr)
approximate_g = -i * np.sin(tau * x_arr)

H = np.zeros([2 * N, 2 * N], dtype=np.complex)
for k in range(N):
    H[k, k] = -3 * i * dt
    H[k, k + N] = (2 / dx2 + 1) * i * dt
    if k == 0:
        H[k, N + 1] = -i / dx2 * dt
        H[k, -1] = -i / dx2 * dt
    elif k == N - 1:
        H[N - 1, N] = -i / dx2 * dt
        H[N - 1, -2] = -i / dx2 * dt
    else:
        H[k, k + N - 1] = -i / dx2 * dt
        H[k, k + N + 1] = -i / dx2 * dt

### Bottom half
H[N:, :N] = H[:N, N:]
H[N:, N:] = H[:N, :N]

### Theta method. 0.5 -> Crank-Nicolson
theta = 0.5
A = np.eye(2 * N) + H * theta
B = np.eye(2 * N) - H * (1 - theta)

### Precompute for faster computations
mat = np.linalg.inv(A) @ B

t = 0
b = np.concatenate((approximate_f, approximate_g))
while t < t_final:
    t += dt
    b = mat @ b
approximate_f = b[:N]
approximate_g = b[N:]
approximate_solution = np.concatenate((approximate_f, approximate_g))

# Calculate the actual solution
actual_f = solution_f(t, np.arange(0, 1, dx))
actual_g = solution_g(t, np.arange(0, 1, dx))
actual_solution = np.concatenate((actual_f, actual_g))

plt.figure(figsize=(7, 5))
plt.plot(x_arr, actual_f.real, c="C0", label=r"$Re(f_\mathrm{true})$")
plt.plot(x_arr, actual_f.imag, c="C1", label=r"$Im(f_\mathrm{true})$")
plt.plot(x_arr, approximate_f.real, c="C0", ls="--", label=r"$Re(f_\mathrm{num})$")
plt.plot(x_arr, approximate_f.imag, c="C1", ls="--", label=r"$Im(f_\mathrm{num})$")
plt.legend(loc=3, fontsize=12)
plt.xlabel("x")
plt.savefig("num_approx.png", dpi=150)
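On the Numba point mentioned above, a sketch of how the time-stepping loop could be jitted (assuming numba is installed; untested against the full script):

    from numba import njit

    @njit(cache=True)
    def advance(mat, b, dt, t_final):
        # Repeatedly apply the precomputed update matrix until t_final.
        t = 0.0
        while t < t_final:
            t += dt
            b = mat @ b
        return b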
I am not going to go through all of your math, but I'm going to offer a suggestion.
The use of a direct calculation for fxx and gxx seems like a good candidate for numerical instability. Intuitively, a first-order method should be expected to make second-order mistakes in the individual terms. Second-order mistakes in the individual terms, after passing through that formula, wind up as constant-order mistakes in the second derivative. Plus, when your step size gets small, you are going to find that a quadratic formula makes even small roundoff mistakes turn into surprisingly large errors.
Instead, I would suggest that you start by turning this into a first-order system of four functions, f, fx, g, and gx, and then proceed with backward Euler on that system. Intuitively, with this approach, a first-order method creates second-order mistakes, which pass through a formula that creates first-order mistakes of them. Now you are converging as you should from the start, and you are also not as sensitive to the propagation of roundoff errors.
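To illustrate the suggested reduction on a toy problem (a generic sketch; the original equations were only posted as images, so this is not your actual system): for y'' = F(y), introduce v = y' so the unknowns are (y, v), and for a linear system u' = A u, backward Euler then solves (I - dt*A) u_{n+1} = u_n:

    import numpy as np

    def backward_euler_linear(A, u0, dt, t_final):
        """Backward Euler for u' = A u: solve (I - dt*A) u_{n+1} = u_n."""
        n = len(u0)
        step = np.linalg.inv(np.eye(n) - dt * A)  # precomputed update matrix
        u, t = np.asarray(u0, dtype=float), 0.0
        while t < t_final:
            u = step @ u
            t += dt
        return u

    # Toy second-order equation y'' = -y, rewritten as u = (y, v) with v = y':
    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])
    print(backward_euler_linear(A, [1.0, 0.0], 1e-3, 1.0))
    # approximately (cos 1, -sin 1), slightly damped by BE's dissipation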
I am new to Python and don't understand how to use the libraries properly. I am trying to write a program to compute the Taylor series approximation of a function centered at 0, at a given x and n.
def fact(n):  # function to calculate n!
    if n <= 0:
        return 1
    else:
        return n * fact(n - 1)

#h = 0.00000000001
#def derivative(f, x, n):  # function that calculates the derivative of a function at a specified x
#    return (f(x + h) - f(x - h)) / (2 * h)

from sympy import *

x = symbols('x')

def taylor(f, x, n):
    for i in range(0, n):
        t = 0
        t = t + ((diff(f, x, n)) / (fact(n))) * (x ** n)
    return t

taylor(sin(x), 1/32, 1)
Here is what works for me after fixing a few things in your code. I used sympy's factorial.
from sympy import *

x = symbols('x')

def taylor(f, x, x0, n):
    t = 0
    for i in range(0, n):
        t = t + ((diff(f, x, i).subs(x, x0)) / (factorial(i))) * (x ** i)
    return t

pprint(taylor(sin(x), x, Rational(1, 32), 4))
and the answer I get is:

    -x**3*cos(1/32)/6 - x**2*sin(1/32)/2 + x*cos(1/32) + sin(1/32)
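One caveat worth noting: since the polynomial is built in powers of x while the derivatives are evaluated at x0, it is really the expansion of sin(x0 + x) in the displacement x. A quick numeric check of that reading (my addition, using the taylor function above with more terms):

    t6 = taylor(sin(x), x, Rational(1, 32), 6)
    # The polynomial in x approximates sin(x0 + x) for small x:
    print(float(t6.subs(x, 0.1)))             # ~0.13087
    print(float(sin(Rational(1, 32) + 0.1)))  # ~0.13087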