I have two solutions of the Poisson equation:
The first one uses a finite-difference scheme (code below):
import numpy as np

# Some variable declarations
nx = 300
ny = 300
nt = 100
xmin = 0.
xmax = 2.
ymin = 0.
ymax = 1.
dx = (xmax - xmin) / (nx - 1)
dy = (ymax - ymin) / (ny - 1)
# Initialization
p = np.zeros((nx, ny))
pd = np.zeros((nx, ny))
b = np.zeros((nx, ny))
# Source: one positive and one negative point charge
b[int(nx / 4), int(ny / 4)] = 100
b[int(3 * nx / 4), int(3 * ny / 4)] = -100
# Jacobi-style iteration with zero-potential walls
for it in range(nt):
    pd = p.copy()
    p[1:-1, 1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
                      (pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
                      b[1:-1, 1:-1] * dx**2 * dy**2) /
                     (2 * (dx**2 + dy**2)))
    p[0, :] = 0
    p[nx - 1, :] = 0
    p[:, 0] = 0
    p[:, ny - 1] = 0
Using the FFT, I have the following code:
def poisson(b, nptx, npty, dx, dy, nboundaryx, nboundaryy):
    # Zero-pad the source so the periodic FFT domain can be enlarged
    bpad = np.zeros((nptx + nboundaryx, npty + nboundaryy))
    bpad[:nptx, :npty] = b
    kxpad = 2 * np.pi * np.fft.fftfreq(nptx + nboundaryx, d=dx)
    kypad = 2 * np.pi * np.fft.fftfreq(npty + nboundaryy, d=dy)
    epsilon = 1.e-9  # guards the k = 0 mode against division by zero
    # Solve -k^2 * p_hat = b_hat in Fourier space; kx varies along axis 0
    # and ky along axis 1, matching the layout of bpad
    ppad = np.real(np.fft.ifft2(
        -np.fft.fft2(bpad) /
        np.maximum(kxpad[:, None]**2 + kypad[None, :]**2, epsilon)))
    p = ppad[:nptx, :npty]
    p[0, :] = 0
    p[nptx - 1, :] = 0
    p[:, 0] = 0
    p[:, npty - 1] = 0
    return p
nptx = 300
npty = 300
b = np.zeros((nptx, npty))
b[int(nptx / 4), int(npty / 4)] = 100
b[int(3 * nptx / 4), int(3 * npty / 4)] = -100
xmin = 0.
xmax = 2.
ymin = 0.
ymax = 1.
nboundaryx = 0
nboundaryy = 0
dx = (xmax - xmin) / (nptx+nboundaryx - 1)
dy = (ymax - ymin) / (npty+nboundaryy - 1)
print(dx)
p = poisson(b,nptx,npty,dx,dy,nboundaryx,nboundaryy)
The results are:
First image using Finite Difference
Second image using FFT
I know the FD scheme is correct, but I am not sure whether I implemented the FFT version correctly. I see a round shape in the FFT result; is this correct?
There are two main differences.
For the finite differences you are calculating the discrete differences; for the FFT solution you are simply applying the Poisson operator of the continuous space to your equation. To reproduce the finite differences exactly, you would need to use the discrete operator in the frequency domain. Instead of calculating the FFT of shifted arrays, you can remember that fft(roll(x, 1)) = exp(-2j * np.pi * np.fft.fftfreq(N)) * fft(x), where roll denotes a circular shift by one sample.
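In code, that identity turns the five-point stencil into a multiplication by its eigenvalues, so the FFT solution reproduces the finite-difference operator exactly. A minimal sketch for a periodic domain (poisson_fft_discrete is my own name, not from the question):
import numpy as np

def poisson_fft_discrete(b, dx, dy):
    # Solve the periodic Poisson equation using the eigenvalues of the
    # five-point finite-difference Laplacian instead of the continuous -k**2.
    nx, ny = b.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    # From the roll/FFT identity, the second difference transforms as
    # fft(x[i+1] - 2*x[i] + x[i-1]) = (2*cos(k*dx) - 2) * fft(x)
    lam = ((2 - 2 * np.cos(kx * dx))[:, None] / dx**2 +
           (2 - 2 * np.cos(ky * dy))[None, :] / dy**2)
    lam[0, 0] = 1.0   # guard the k = 0 (mean) mode against division by zero
    phat = -np.fft.fft2(b) / lam
    phat[0, 0] = 0.0  # the mean of the solution is arbitrary; set it to zero
    return np.real(np.fft.ifft2(phat))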
The other point is the boundary conditions (zero potential on the walls). The quick and dirty solution is to use the method of image charges to make the potential vanish on the walls, and to solve the Poisson equation on the augmented space. If you care about memory usage or the purity of the solution, you could use the sine transform instead, which has slightly more complicated translation formulas but can be computed without augmenting the space, since the potential is forced to be zero on the boundaries by construction (because sin(pi * n) = 0 for any integer n).
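A minimal sketch of the sine-transform route, assuming scipy is available (scipy.fft.dstn/idstn with type=1 implement the transform with zero boundaries; the grid here holds only interior points):
import numpy as np
from scipy.fft import dstn, idstn

def poisson_dirichlet(b, dx, dy):
    # b holds the source on the interior points; the type-1 discrete sine
    # transform diagonalises the five-point Laplacian with p = 0 on the walls.
    nx, ny = b.shape
    jx = np.arange(1, nx + 1)
    jy = np.arange(1, ny + 1)
    # Eigenvalues of the 1-D second-difference operator under DST-I
    lx = (2 - 2 * np.cos(np.pi * jx / (nx + 1))) / dx**2
    ly = (2 - 2 * np.cos(np.pi * jy / (ny + 1))) / dy**2
    phat = -dstn(b, type=1) / (lx[:, None] + ly[None, :])
    return idstn(phat, type=1)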
The solution in the frequency domain is a direct solution: you calculate each coefficient with a closed formula and then perform the inverse Fourier transform; no iteration is required. The accuracy tends to be good as well, as long as you compute the differences with enough accuracy.
If you are really worried about this, you should watch out for differences like (1 - exp(2j*pi/N)): because the second term is close to 1, the number of significant bits is reduced. But you can improve the accuracy of such expressions by factoring them as exp(1j*pi/N) * (exp(-1j*pi/N) - exp(1j*pi/N)) = exp(1j*pi/N) * (-2j * sin(pi/N)), where you have a product and you don't lose any significant bits. All of this matters more if you are computing in single or half precision (you will probably not notice any rounding error using numpy.float64 or numpy.complex128).
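A quick way to see the cancellation, as a sketch in single precision (the numbers here are only illustrative):
import numpy as np

N = 100000
theta = np.float32(np.pi / N)

# Naive form: subtracting two nearly equal numbers loses leading bits
naive = np.complex64(1) - np.exp(np.complex64(1j) * (2 * theta))
# Factored form: a pure product, so no catastrophic cancellation
factored = np.exp(np.complex64(1j) * theta) * np.complex64(-2j) * np.sin(theta)
# Reference value computed in double precision
exact = 1 - np.exp(2j * np.pi / N)

print(abs(naive - exact) / abs(exact))     # noticeably larger relative error
print(abs(factored - exact) / abs(exact))  # near float32 round-off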
If you calculate in the frequency domain and you are not happy with the accuracy, you can always "refine" it with a few iterations of your finite-difference equation.
This is more of a computational physics problem, and I've asked it on Physics Stack Exchange, but got no answers there. This is, I suppose, a mix of the disciplines here and there (and maybe even Mathematics Stack Exchange), so finding the right place to post is a task in and of itself, apparently...
I'm attempting to use Crank-Nicolson scheme to solve the TDSE in 1D. The initial wave is a real Gaussian that has been normalised wrt its probability density. As the solution evolves, a depression grows in the central peak of the real part of the wave, and the imaginary part's central trough is perhaps a bit higher than I expect (image below).
Does this behaviour seem reasonable? I have searched around and not seen questions/figures that are similar. I've tested another person's code from Github and it exhibits the same behaviour, which makes me feel a bit better. But I still think the center peak should just decrease in height and increase in width. The likelihood of me getting a physics-based explanation is relatively low here I'd assume, but a computational-based explanation on errors I may have made is more likely.
I'm happy to give more information, for example my code, or the matrices used in the scheme, etc. Thanks in advance!
Here's a link to a GIF of the time evolution:
And the part of my code relevant to solving the 1D TDSE:
(pretty much the entire thing except the plotting)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
# Define function for norm.
def normf(dxc, uc, ic):
    return sum(dxc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of position.
def xexpf(dxc, xc, uc, ic):
    return sum(dxc * xc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of squared position.
def xexpsf(dxc, xc, uc, ic):
    return sum(dxc * np.square(xc) * np.square(np.abs(uc[ic, :])))

# Define function for standard deviation.
def sdaf(xexpc, xexpsc, ic):
    return np.sqrt(xexpsc[ic] - np.square(xexpc[ic]))
# Time t: t0 <= t <= tf. Have N steps at which to evaluate the CN scheme. The
# time interval is dt. decp: variable for plotting to a certain number of
# decimal places.
t0 = 0
tf = 20
N = 200
dt = tf / N
t = np.linspace(t0, tf, num = N + 1, endpoint = True)
decp = str(dt)[::-1].find('.')
# Initialise array for filling with norm values at each time step.
norm = np.zeros(len(t))
# Initialise array for expectation value of position.
xexp = np.zeros(len(t))
# Initialise array for expectation value of squared position.
xexps = np.zeros(len(t))
# Initialise array for alternate standard deviation.
sda = np.zeros(len(t))
# Position x: -a <= x <= a. M is an even number. There are M + 1 total discrete
# positions, for the points to be symmetric and centred at x = 0.
a = 100
M = 1200
dx = (2 * a) / M
x = np.linspace(-a, a, num = M + 1, endpoint = True)
# The gaussian function u diffuses over time. sd sets the width of gaussian. u0
# is the initial gaussian at t0.
sd = 1
var = np.power(sd, 2)
mu = 0
u0 = np.sqrt(1 / np.sqrt(np.pi * var)) * np.exp(-np.power(x - mu, 2) / (2 * \
var))
u = np.zeros([len(t), len(x)], dtype = 'complex_')
u[0, :] = u0
# Normalise u.
u[0, :] = u[0, :] / np.sqrt(normf(dx, u, 0))
# Set coefficients of CN scheme.
alpha = dt * -1j / (4 * np.power(dx, 2))
beta = dt * 1j / (4 * np.power(dx, 2))
# Tridiagonal matrices Al and AR. Al to be solved using Thomas algorithm.
Al = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Al[i + 1, i] = alpha
    Al[i, i] = 1 - (2 * alpha)
    Al[i, i + 1] = alpha
# Corner elements for BC's.
Al[M, M], Al[0, 0] = 1 - alpha, 1 - alpha
Ar = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Ar[i + 1, i] = beta
    Ar[i, i] = 1 - (2 * beta)
    Ar[i, i + 1] = beta
# Corner elements for BC's.
Ar[M, M], Ar[0, 0] = 1 - 2*beta, 1 - beta
# Thomas algorithm variables. Following similar naming as in Wiki article.
a = np.diag(Al, -1)
b = np.diag(Al)
c = np.diag(Al, 1)
NT = len(b)
cp = np.zeros(NT - 1, dtype = 'complex_')
for n in range(0, NT - 1):
    if n == 0:
        cp[n] = c[n] / b[n]
    else:
        cp[n] = c[n] / (b[n] - (a[n - 1] * cp[n - 1]))
d = np.zeros(NT, dtype = 'complex_')
dp = np.zeros(NT, dtype = 'complex_')
# Iterate over each time step to solve CN method. Maintain boundary
# conditions. Keep track of standard deviation.
for i in range(0, N):
    # BC's.
    u[i, 0], u[i, M] = 0, 0
    # Find RHS.
    d = np.dot(Ar, u[i, :])
    # Forward sweep of the Thomas algorithm.
    for n in range(0, NT):
        if n == 0:
            dp[n] = d[n] / b[n]
        else:
            dp[n] = (d[n] - (a[n - 1] * dp[n - 1])) / \
                    (b[n] - (a[n - 1] * cp[n - 1]))
    # Back substitution.
    nc = NT - 1
    while nc > -1:
        if nc == NT - 1:
            u[i + 1, nc] = dp[nc]
        else:
            u[i + 1, nc] = dp[nc] - (cp[nc] * u[i + 1, nc + 1])
        nc -= 1
    norm[i] = normf(dx, u, i)
    xexp[i] = xexpf(dx, x, u, i)
    xexps[i] = xexpsf(dx, x, u, i)
    sda[i] = sdaf(xexp, xexps, i)

# Fill in final norm value.
norm[N] = normf(dx, u, N)
# Fill in final position expectation value.
xexp[N] = xexpf(dx, x, u, N)
# Fill in final squared position expectation value.
xexps[N] = xexpsf(dx, x, u, N)
# Fill in final standard deviation value.
sda[N] = sdaf(xexp, xexps, N)
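For reference, the hand-rolled Thomas sweep above can be cross-checked against a library tridiagonal solver; a minimal sketch assuming scipy is available (this cross-check is not part of my original script):
from scipy.linalg import solve_banded

# Tridiagonal bands of Al in scipy's (upper, main, lower) row layout.
ab_bands = np.zeros((3, len(x)), dtype=complex)
ab_bands[0, 1:] = np.diag(Al, 1)    # superdiagonal
ab_bands[1, :] = np.diag(Al)        # main diagonal
ab_bands[2, :-1] = np.diag(Al, -1)  # subdiagonal

# One CN step: solve Al @ u_next = Ar @ u_current and compare with u[1, :].
u_next = solve_banded((1, 1), ab_bands, np.dot(Ar, u[0, :]))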
I have a square matrix of size 8x8. Some of the terms are functions of frequency (omega). I want to write a function which searches for eigenfrequencies in a given range like (0 - 1 kHz).
I have included the function below. Here the terms 'tx', 'ki1' and 'ki2' are functions of omega. For finding eigenfrequencies, the determinant of the matrix should be zero, but I can't compute the determinant of the matrix unless all values are given.
Basically, I don't want to give a frequency and then get eigenvalues.
I want the matrix to be solved to give the eigenvalues, which will be the eigenfrequencies.
Can you please suggest any method or function for that?
Any help or suggestion would be appreciated.
Thanks in advance!
import numpy as np

def mat(l1, l2, omega1):
    # c1, c2, n and tau are constants defined elsewhere in the script;
    # ki1, ki2 and tx are the omega-dependent terms.
    kmat = np.zeros((8, 8), dtype=complex)
    ki1 = omega1 / c1
    ki2 = omega1 / c2
    tx = 1 + n * np.exp(-1j * omega1 * tau)
    kmat[0, 0] = -1
    kmat[0, 1] = 1
    kmat[1, 0] = -np.exp(-1j * ki1 * l1)  # simple duct
    kmat[1, 2] = 1
    kmat[2, 1] = -np.exp(1j * ki1 * l1)
    kmat[2, 3] = 1
    kmat[3, 2] = tx  # velocity coupling
    kmat[3, 3] = -tx
    kmat[3, 4] = -1
    kmat[3, 5] = 1
    kmat[4, 2] = 1
    kmat[4, 3] = -1
    kmat[4, 4] = -1
    kmat[4, 5] = -1
    kmat[5, 4] = -np.exp(-1j * ki2 * l2)
    kmat[5, 6] = 1
    kmat[6, 5] = -np.exp(1j * ki2 * l2)
    kmat[6, 7] = 1
    kmat[7, 6] = -1
    kmat[7, 7] = -1
    return kmat
Why not do a sort of bootstrapping to estimate the eigenvalues?
For each repetition, fill the elements of the matrix which are not constant with f(omega) and find the eigenvalues, then sort them from largest to smallest.
If the distribution of eigenvalues stays more or less the same across the repetitions, you have a fair enough estimate.
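A rough sketch of that idea (it assumes l1, l2 and the constants c1, c2, n, tau used inside mat() are already defined; the 0 - 1 kHz range follows the question):
import numpy as np

rng = np.random.default_rng(0)
n_reps = 200
eigs = np.empty((n_reps, 8), dtype=complex)

for r in range(n_reps):
    # Draw a random frequency in the range of interest (rad/s)
    omega = rng.uniform(0.0, 2 * np.pi * 1000.0)
    w = np.linalg.eigvals(mat(l1, l2, omega))
    eigs[r] = w[np.argsort(-np.abs(w))]  # sort from largest to smallest

# A small spread across repetitions means the estimate is stable enough
print(np.abs(eigs).std(axis=0))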
You don't say which project you are looking at the documentation for but sympy can do this:
In [1]: omega = Symbol('omega')
In [2]: M = Matrix([[1, omega], [omega, 1]])
In [3]: M
Out[3]:
⎡1 ω⎤
⎢ ⎥
⎣ω 1⎦
In [4]: M.eigenvals()
Out[4]: {1 - ω: 1, ω + 1: 1}
However you need to bear in mind that for a matrix larger than 4x4 with symbolic entries it is not always possible to obtain a "closed form" expression for the eigenvalues due to the Abel-Ruffini theorem:
https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem
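If a closed form is out of reach, a numeric fallback is to scan |det(kmat(omega))| over the band and refine its local minima; a sketch using the mat() function from the question (the scan resolution and the use of scipy.optimize.minimize_scalar are my own choices):
import numpy as np
from scipy.optimize import minimize_scalar

omegas = 2 * np.pi * np.linspace(1.0, 1000.0, 5000)  # scan 1 Hz - 1 kHz
mag = np.array([abs(np.linalg.det(mat(l1, l2, w))) for w in omegas])

# Candidate eigenfrequencies sit at local minima of |det(omega)|
candidates = [k for k in range(1, len(mag) - 1)
              if mag[k] < mag[k - 1] and mag[k] < mag[k + 1]]

# Refine each candidate by minimising |det| in a bracket around it
for k in candidates:
    res = minimize_scalar(lambda w: abs(np.linalg.det(mat(l1, l2, w))),
                          bracket=(omegas[k - 1], omegas[k], omegas[k + 1]))
    print(res.x / (2 * np.pi), "Hz")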
I have the following data points: There are 5 sublists in this list of data. What I am trying to do is find the points where there is a maximum amount of curvature.
import numpy as np
import matplotlib.pyplot as plt

for i in range(len(smallest_5)):
    x = [x for x, y in smallest_5[i]]
    y = [y for x, y in smallest_5[i]]
    plt.scatter(x, y)
    plt.savefig('bend' + str(i) + '.png')  # 'count' in the original; the loop index works here
    plt.show()
I've used this code to plot the points.
sub_curvature = []
for i in range(len(smallest_5)):
    a = np.array(smallest_5[i])
    dx_dt = np.gradient(a[:, 0])
    dy_dt = np.gradient(a[:, 1])
    velocity = np.array([[dx_dt[j], dy_dt[j]] for j in range(dx_dt.size)])
    ds_dt = np.sqrt(dx_dt * dx_dt + dy_dt * dy_dt)
    tangent = np.array([1 / ds_dt] * 2).transpose() * velocity
    tangent_x = tangent[:, 0]
    tangent_y = tangent[:, 1]
    deriv_tangent_x = np.gradient(tangent_x)
    deriv_tangent_y = np.gradient(tangent_y)
    dT_dt = np.array([[deriv_tangent_x[j], deriv_tangent_y[j]] for j in range(deriv_tangent_x.size)])
    length_dT_dt = np.sqrt(deriv_tangent_x * deriv_tangent_x + deriv_tangent_y * deriv_tangent_y)
    normal = np.array([1 / length_dT_dt] * 2).transpose() * dT_dt
    d2s_dt2 = np.gradient(ds_dt)
    d2x_dt2 = np.gradient(dx_dt)
    d2y_dt2 = np.gradient(dy_dt)
    curvature = np.abs(d2x_dt2 * dy_dt - dx_dt * d2y_dt2) / (dx_dt * dx_dt + dy_dt * dy_dt)**1.5
    t_component = np.array([d2s_dt2] * 2).transpose()
    n_component = np.array([curvature * ds_dt * ds_dt] * 2).transpose()
    acceleration = t_component * tangent + n_component * normal
    sub_curvature.append(curvature)
I used the code above to calculate the curvature of individual points on the data.
Above are some of the graphs I created using the data. As you can see, the first one has no real bend but the last two have a point where there is a large bend. How could I go about identifying this region? Is it correct to calculate the curvature for individual points or should I look at the curvature over a sliding window of points? Thank you!
If we assume "curvature" to mean circular curvature, then you'll need a sliding window over 3 points (since 3 points determine a circle).
For any three points (a, b, c) the curvature is 2 * |(a-b) x (b-c)| / (|a-b| * |b-c| * |c-a|).
We can get a-b and b-c from
pts = np.asarray(smallest_5[i])  # one sublist as an (N, 2) array
ab = pts[1:] - pts[:-1]
and a-c from:
ac = pts[2:] - pts[:-2]
Then the squared curvature at each interior point is:
curv_sq = 4 * np.cross(ab[1:], ab[:-1])**2 / ((ab[1:]**2).sum(axis=1) * (ab[:-1]**2).sum(axis=1) * (ac**2).sum(axis=1))
Since we're just looking for a maximum curvature, we don't actually have to take the square root of that. We can find the index of the point with maximum curvature with
max_curv_index = np.argmax(curv_sq) + 1  # +1 because curv_sq starts at the second point
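Wrapped into a small helper for all five sublists (a sketch; smallest_5 is the data from the question):
import numpy as np

def max_curvature_point(points):
    # Index of the interior point with maximum circular (3-point) curvature.
    pts = np.asarray(points, dtype=float)
    ab = pts[1:] - pts[:-1]
    ac = pts[2:] - pts[:-2]
    curv_sq = 4 * np.cross(ab[1:], ab[:-1])**2 / (
        (ab[1:]**2).sum(axis=1) * (ab[:-1]**2).sum(axis=1) * (ac**2).sum(axis=1))
    return np.argmax(curv_sq) + 1  # shift back to an index into points

for i, sub in enumerate(smallest_5):
    print(i, max_curvature_point(sub))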
As an idea, you can find the minimum y which is not the first or the last value in the y-dimension of the array. For example:
s4 = np.array(smallest_5[4]).T  # extract a sub-array
min_y = np.argmin(s4[1])  # gives 13
min_y in (0, len(s4[1]) - 1)  # gives False, so the minimum is in the middle of the curve
s0 = np.array(smallest_5[0]).T  # extract a sub-array
min_y = np.argmin(s0[1])  # gives 16
min_y in (0, len(s0[1]) - 1)  # gives True, so the minimum is not in the middle of the curve
Similar to many tutorials on the web, I've tried implementing a windowed-sinc lowpass filter using the following python functions:
import numpy as np

def black_wind(w):
    '''Blackman window of width w.'''
    samps = np.arange(w)
    return (0.42 - 0.5 * np.cos(2 * np.pi * samps / (w - 1)) +
            0.08 * np.cos(4 * np.pi * samps / (w - 1)))

def lp_win_sinc(tw, fc, n):
    '''Lowpass sinc impulse response.
    Parameters:
        tw = approximate transition width [fraction of Nyquist freq]
        fc = cutoff freq [fraction of Nyquist freq]
        n = length of output.
    Returns:
        s = impulse response of windowed-sinc filter, zero-padded
            to make len(s) = n
    '''
    m = int(np.ceil(4. / tw / 2) * 2)
    samps = np.arange(m + 1)
    shift = samps - m // 2      # integer division so the centre can be indexed
    shift[m // 2] = 1           # placeholder to avoid division by zero
    h = np.sin(2 * np.pi * fc * shift) / shift
    h[m // 2] = 2 * np.pi * fc  # limiting value of the sinc at zero
    h = h * black_wind(m + 1)
    h = h / h.sum()
    s = np.zeros(n)
    s[:len(h)] = h
    return s
For input: 'tw = 0.05', 'fc = 0.2', 'n = 6000', the magnitude of the fft seems reasonable.
tw = 0.05
fc = 0.2
n = 6000
lp = lp_win_sinc(tw, fc, n)
f_lp = np.fft.rfft(lp)
plt.figure()
x = np.linspace(0, 0.5, len(f_lp))
plt.plot(x, np.abs(f_lp))
magnitude of lowpass filter response
However, the phase is non-linear above ~fc.
plt.figure()
x = np.linspace(0, 0.5, len(f_lp))
plt.plot(x, np.unwrap(np.angle(f_lp)))
phase of lowpass filter response
Given the symmetry of the non-zero-padded portion of the impulse response, I would expect the resulting phase to be linear. Can someone explain what is going on? Perhaps I'm using a numpy function incorrectly, or maybe my expectations are incorrect. I'm very grateful for any help.
***********************EDIT***********************
Based on some of the helpful comments on this question and some more work, I wrote a function that produces zero phase delay, which makes the np.angle() results a bit easier to interpret.
def lp_win_sinc(tw, fc, n):
    m = int(np.ceil(2. / tw) * 2)
    samps = np.arange(m + 1)
    shift = samps - m // 2
    shift[m // 2] = 1
    h = np.sin(2 * np.pi * fc * shift) / shift
    h[m // 2] = 2 * np.pi * fc
    h = h * np.blackman(m + 1)
    h = h / h.sum()
    s = np.zeros(n)
    s[:len(h)] = h
    # Shift the symmetric response so its centre sits at t = 0 (zero phase delay).
    return np.roll(s, -(m // 2))
The main change here is using np.roll() to place the line of symmetry at t=0.
The magnitudes in the stop band are crossing zero. The phase of a coefficient after a zero crossing jumps by 180 degrees, which confuses np.angle()/np.unwrap(): a coefficient of -1 at phase 0° is the same as +1 at phase 180°.
The phase as shown in your graph is in fact linear. It's a constant slope in the passband, corresponding to a constant delay in the time domain. It's a much steeper slope, which renders as wrapping around at 2pi boundaries, in the stopband. But the value of the phase in the stopband is not particularly important since those frequencies aren't going to come through the filter anyway.
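To see this directly, one option is to remove the known delay of m/2 samples before taking the angle; a sketch using lp and tw from the question (m is recomputed the same way as inside lp_win_sinc):
import numpy as np

m = int(np.ceil(4. / tw / 2) * 2)  # filter order used inside lp_win_sinc
delay = m // 2                     # delay of the symmetric response, in samples

f_lp = np.fft.rfft(lp)
w = np.linspace(0, np.pi, len(f_lp))
# Multiply by exp(+1j*w*delay) to cancel the linear-phase (pure delay) term
compensated = f_lp * np.exp(1j * w * delay)

# The remaining phase is ~0 in the passband and jumps by pi at each
# stop-band zero crossing of the magnitude response
phase = np.angle(compensated)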
I have an iterative model in Python which generates a signal using a function that contains a derivative. As the model iterates, the signal becomes noisy; I suspect it may be an issue with computing the numerical derivative. I've tried to smooth this by applying a low-pass filter, convolving the noisy signal with a Gaussian kernel. I use the code snippet:
nw = 256
std = 40
window = gaussian(nw, std, sym=True)
filtered = convolve(current, window, mode='same') / np.sum(window)
where current is my signal, and gaussian and convolve have been imported from scipy. This seems to give a slight improvement, and the first 2 or 3 iterations appear very smooth. However, after that the signal becomes extremely noisy again, despite the fact that the low-pass filter is positioned inside the iterative loop.
Can anyone suggest where I might be going wrong or how I could better tackle this problem? Thanks.
EDIT: As suggested I've included the code I'm using below. At 5 iterations the noise on the signal is clearly apparent.
import numpy as np
from scipy import special
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.signal import convolve
from scipy.signal import gaussian
# Constants
B = 426400E-9 # tesla
R = 71723E3
Rkm = R / 1000.
Omega = 1.75e-4 #8.913E-4 # rads/s
period = (2. * np.pi / Omega) / 3600. # Gets period in hours
Bj = 2.0 * B
mdot = 1000.
sigmapstar = 0.05
# Create rhoe array
rho0 = 5.* R
rho1 = 100. * R
rhoe = np.linspace(rho0, rho1, int(2e5))  # linspace needs an integer sample count
# Define flux function and z component of equatorial field strength
Fe = B * R**3 / rhoe
Bze = B * (R/rhoe)**3
def derivs(u, rhoe, p):
    """Computes the derivative"""
    wOmegaJ = u
    Bj, sigmapstar, mdot, B, R = p
    # Compute the derivative of w/OmegaJ wrt rhoe (Fe and Bze have been subbed)
    dwOmegaJ = (((8.0 * np.pi * sigmapstar * B**2 * (R**6)) / (mdot * rhoe**5))
                * (1.0 - wOmegaJ) - (2. * wOmegaJ / rhoe))
    return dwOmegaJ
its = 5 # number of iterations to perform
i = 0
# Loop to iterate
while i < its:
    # Define the initial condition of rigid corotation
    wOmegaJ_0 = 1
    params = [Bj, sigmapstar, mdot, B, R]
    init = wOmegaJ_0
    # Compute numerical solution to Hill eqn
    u = odeint(derivs, init, rhoe, args=(params,))
    wOmega = u[:, 0]
    # Calculate I_rho
    i_rho = 8. * np.pi * sigmapstar * Omega * Fe * (1. - wOmega)
    dx = rhoe[1] - rhoe[0]
    differential = np.gradient(i_rho, dx)
    jpara = 1. * differential / (4 * np.pi * rhoe * Bze)
    jpari = 2. * B * jpara  # 'para' in the original was a typo for jpara
    # Remove infinity and NaN values
    jpari[~np.isfinite(jpari)] = 0.0
    # Convolve with a Gaussian window to smooth the curve
    nw = 256
    std = 40
    window = gaussian(nw, std, sym=True)
    filtered = convolve(jpari, window, mode='same') / np.sum(window)
    jpari = filtered
    # Pedersen conductivity as function of jpari
    sigmapstar0 = 0.05
    jstar = 0.01e-6
    jstarstar = 0.25e-6
    s1 = 0.1e6  # (Am^-2)^-1
    s2 = 9.9e6  # (Am^-2)^-1
    n = 8.
    # Calculate new sigmapstar. Realistic conductivity
    sigmapstarNew = sigmapstar0 + 0.5 * (s1 + s2 / (1 + (jpari / jstarstar)**n)**(1. / n)) \
        * (np.sqrt(jpari**2 + jstar**2) + jpari)
    diff = np.abs(sigmapstar - sigmapstarNew) / sigmapstar * 100
    diff = max(diff)
    sigmapstar = 0.5 * sigmapstar + 0.5 * sigmapstarNew  # Weighted averaging
    i += 1
    print(diff)
# Plot jpari
ax = plt.subplot(111)
ax.plot(rhoe/R, jpari * 1e6)
ax.axhline(0, ls=':')
ax.set_xlabel(r'$\rho_e / R_{UCD}$')
ax.set_ylabel(r'$j_{\parallel i} $ / $ \mu$ A m$^{-2}$')
ax.set_xlim([0,80])
ax.set_ylim(-0.01,0.01)
plt.locator_params(nbins=5)
plt.draw()
plt.show()