This is more of a computational physics problem, and I've asked it on Physics Stack Exchange, but got no answers there. It is, I suppose, a mix of the disciplines covered here and there (and maybe even Mathematics Stack Exchange), so finding the right place to post is a task in and of itself, apparently...
I'm attempting to use the Crank-Nicolson scheme to solve the TDSE in 1D. The initial wave is a real Gaussian that has been normalised with respect to its probability density. As the solution evolves, a depression grows in the central peak of the real part of the wave, and the imaginary part's central trough is perhaps a bit higher than I expect (image below).
Does this behaviour seem reasonable? I have searched around and not seen similar questions or figures. I've tested another person's code from GitHub and it exhibits the same behaviour, which makes me feel a bit better, but I still think the central peak should simply decrease in height and increase in width. A physics-based explanation is probably unlikely here, but a computational explanation of errors I may have made is more likely.
I'm happy to give more information, for example my code, or the matrices used in the scheme, etc. Thanks in advance!
Here's a link to a GIF of the time evolution:
And the part of my code relevant to solving the 1D TDSE:
(pretty much the entire thing except the plotting)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
# Define function for norm.
def normf(dxc, uc, ic):
    return sum(dxc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of position.
def xexpf(dxc, xc, uc, ic):
    return sum(dxc * xc * np.square(np.abs(uc[ic, :])))

# Define function for expectation value of squared position.
def xexpsf(dxc, xc, uc, ic):
    return sum(dxc * np.square(xc) * np.square(np.abs(uc[ic, :])))

# Define function for standard deviation.
def sdaf(xexpc, xexpsc, ic):
    return np.sqrt(xexpsc[ic] - np.square(xexpc[ic]))
# Time t: t0 <= t <= tf. Have N steps at which to evaluate the CN scheme. The
# time interval is dt. decp: variable for plotting to a certain number of
# decimal places.
t0 = 0
tf = 20
N = 200
dt = tf / N
t = np.linspace(t0, tf, num = N + 1, endpoint = True)
decp = str(dt)[::-1].find('.')
# Initialise array for filling with norm values at each time step.
norm = np.zeros(len(t))
# Initialise array for expectation value of position.
xexp = np.zeros(len(t))
# Initialise array for expectation value of squared position.
xexps = np.zeros(len(t))
# Initialise array for alternate standard deviation.
sda = np.zeros(len(t))
# Position x: -a <= x <= a. M is an even number. There are M + 1 total discrete
# positions, for the points to be symmetric and centred at x = 0.
a = 100
M = 1200
dx = (2 * a) / M
x = np.linspace(-a, a, num = M + 1, endpoint = True)
# The gaussian function u diffuses over time. sd sets the width of gaussian. u0
# is the initial gaussian at t0.
sd = 1
var = np.power(sd, 2)
mu = 0
u0 = np.sqrt(1 / np.sqrt(np.pi * var)) * np.exp(-np.power(x - mu, 2) / (2 * \
var))
u = np.zeros([len(t), len(x)], dtype = 'complex_')
u[0, :] = u0
# Normalise u.
u[0, :] = u[0, :] / np.sqrt(normf(dx, u, 0))
# Set coefficients of CN scheme.
alpha = dt * -1j / (4 * np.power(dx, 2))
beta = dt * 1j / (4 * np.power(dx, 2))
# Tridiagonal matrices Al and AR. Al to be solved using Thomas algorithm.
Al = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Al[i + 1, i] = alpha
    Al[i, i] = 1 - (2 * alpha)
    Al[i, i + 1] = alpha
# Corner elements for BC's.
Al[M, M], Al[0, 0] = 1 - alpha, 1 - alpha
Ar = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Ar[i + 1, i] = beta
    Ar[i, i] = 1 - (2 * beta)
    Ar[i, i + 1] = beta
# Corner elements for BC's.
Ar[M, M], Ar[0, 0] = 1 - 2*beta, 1 - beta
# Thomas algorithm variables. Following similar naming as in Wiki article.
a = np.diag(Al, -1)
b = np.diag(Al)
c = np.diag(Al, 1)
NT = len(b)
cp = np.zeros(NT - 1, dtype = 'complex_')
for n in range(0, NT - 1):
    if n == 0:
        cp[n] = c[n] / b[n]
    else:
        cp[n] = c[n] / (b[n] - (a[n - 1] * cp[n - 1]))
d = np.zeros(NT, dtype = 'complex_')
dp = np.zeros(NT, dtype = 'complex_')
# Iterate over each time step to solve CN method. Maintain boundary
# conditions. Keep track of standard deviation.
for i in range(0, N):
    # BC's.
    u[i, 0], u[i, M] = 0, 0
    # Find RHS.
    d = np.dot(Ar, u[i, :])
    for n in range(0, NT):
        if n == 0:
            dp[n] = d[n] / b[n]
        else:
            dp[n] = (d[n] - (a[n - 1] * dp[n - 1])) / (b[n] - (a[n - 1] * cp[n - 1]))
    nc = NT - 1
    while nc > -1:
        if nc == NT - 1:
            u[i + 1, nc] = dp[nc]
            nc -= 1
        else:
            u[i + 1, nc] = dp[nc] - (cp[nc] * u[i + 1, nc + 1])
            nc -= 1
    norm[i] = normf(dx, u, i)
    xexp[i] = xexpf(dx, x, u, i)
    xexps[i] = xexpsf(dx, x, u, i)
    sda[i] = sdaf(xexp, xexps, i)
# Fill in final norm value.
norm[N] = normf(dx, u, N)
# Fill in final position expectation value.
xexp[N] = xexpf(dx, x, u, N)
# Fill in final squared position expectation value.
xexps[N] = xexpsf(dx, x, u, N)
# Fill in final standard deviation value.
sda[N] = sdaf(xexp, xexps, N)
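For reference, a quick check that could be appended to the code above (this assumes the scheme corresponds to i du/dt = -(1/2) d^2u/dx^2 with hbar = m = 1, in which case a free Gaussian with sd = 1 should keep its norm at 1 and have the standard deviation of |u|^2 grow as sqrt((1 + t^2) / 2)):
# Compare against the analytic free-particle result (hbar = m = 1 assumed):
# the norm should stay 1 and the standard deviation of |u|^2 should follow
# sqrt((1 + t^2) / 2) for an initial Gaussian with sd = 1.
sda_exact = np.sqrt((1 + np.square(t)) / 2)
print('max |norm - 1|        :', np.max(np.abs(norm - 1)))
print('max |sda - sda_exact| :', np.max(np.abs(sda - sda_exact)))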
I have two solutions of the 2D Poisson equation.
The first uses a finite-difference scheme (code below):
import numpy as np

# Some variable declarations
nx = 300
ny = 300
nt = 100
xmin = 0.
xmax = 2.
ymin = 0.
ymax = 1.
dx = (xmax - xmin) / (nx - 1)
dy = (ymax - ymin) / (ny - 1)

# Initialization
p = np.zeros((nx, ny))
pd = np.zeros((nx, ny))
b = np.zeros((nx, ny))

# Source
b[int(nx / 4), int(ny / 4)] = 100
b[int(3 * nx / 4), int(3 * ny / 4)] = -100

for it in range(nt):
    pd = p.copy()
    p[1:-1, 1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
                      (pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
                      b[1:-1, 1:-1] * dx**2 * dy**2) /
                     (2 * (dx**2 + dy**2)))
    p[0, :] = 0
    p[nx - 1, :] = 0
    p[:, 0] = 0
    p[:, ny - 1] = 0
Using FFT I have the following code:
def poisson(b, nptx, npty, dx, dy, nboundaryx, nboundaryy):
    p = np.zeros((nptx, npty))
    ppad = np.zeros((nptx + nboundaryx, npty + nboundaryy))
    phatpad = np.zeros((nptx + nboundaryx, npty + nboundaryy))
    bpad = np.zeros((nptx + nboundaryx, npty + nboundaryy))
    bpad[:nptx, :npty] = b
    kxpad = 2 * np.pi * np.fft.fftfreq(nptx + nboundaryx, d=dx)
    kypad = 2 * np.pi * np.fft.fftfreq(npty + nboundaryy, d=dy)
    epsilon = 1.e-9
    ppad = np.real(np.fft.ifft2(-np.fft.fft2(bpad) /
                                np.maximum(kxpad[None, :]**2 + kypad[:, None]**2, epsilon)))
    p = ppad[:nptx, :npty]
    p[0, :] = 0
    p[nptx - 1, :] = 0
    p[:, 0] = 0
    p[:, npty - 1] = 0
    return p
nptx = 300
npty = 300
b = np.zeros((nptx, npty))
b[int(nptx / 4), int(npty / 4)] = 100
b[int(3 * nptx / 4), int(3 * npty / 4)] = -100
xmin = 0.
xmax = 2.
ymin = 0.
ymax = 1.
nboundaryx = 0
nboundaryy = 0
dx = (xmax - xmin) / (nptx+nboundaryx - 1)
dy = (ymax - ymin) / (npty+nboundaryy - 1)
print(dx)
p = poisson(b,nptx,npty,dx,dy,nboundaryx,nboundaryy)
The results are:
First image using Finite Difference
Second image using FFT
I know the finite-difference scheme is correct, but I'm not sure whether I did the FFT version correctly. I see a round shape in the FFT result; is this correct?
There are two main differences.
For the finite differences you are calculating the discrete differences; for the FFT solution you are simply computing the Poisson operator on the continuous space and applying that to your equation. To compute the FFT solution in exactly the same way, you would need to use the discrete difference operator in the frequency domain instead of the continuous one. Rather than taking the FFT of the shifted arrays explicitly, what you can do is remember that fft(roll(x, 1)) = exp(-2j * np.pi * np.fft.fftfreq(N)) * fft(x), where roll denotes a circular shift by one sample.
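For example, here is a sketch of a periodic solver that uses the discrete symbol of the 5-point Laplacian, obtained from that shift identity, instead of the continuous -(kx^2 + ky^2). The function name and the array layout (axis 0 along x, axis 1 along y) are my own choices, not your code:
import numpy as np

def poisson_fft_discrete(b, dx, dy):
    # Periodic solution of lap(p) = b using the eigenvalues of the discrete
    # 5-point Laplacian: exp(+2j*pi*f) + exp(-2j*pi*f) - 2 = 2*cos(2*pi*f) - 2.
    nx, ny = b.shape
    fx = np.fft.fftfreq(nx)                      # frequencies in cycles per sample
    fy = np.fft.fftfreq(ny)
    lam_x = (2.0 * np.cos(2.0 * np.pi * fx) - 2.0) / dx**2
    lam_y = (2.0 * np.cos(2.0 * np.pi * fy) - 2.0) / dy**2
    lam = lam_x[:, None] + lam_y[None, :]
    bhat = np.fft.fft2(b)
    phat = np.zeros_like(bhat)
    nonzero = lam != 0.0                         # the zero mode (the mean of p) is arbitrary
    phat[nonzero] = bhat[nonzero] / lam[nonzero]
    return np.real(np.fft.ifft2(phat))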
The other point is that you are using boundary conditions (zero potential on the walls). The quick and dirty solution is to use the method of image charges to ensure the potential vanishes on the walls and to compute the Poisson equation on the augmented space. If you care about memory usage or solution purity, you could instead use the sine transform, which has slightly more complicated translation formulas but can be computed without augmenting the space, since the potential is forced to be zero on the boundaries by construction (because sin(pi * n) = 0 for any integer n).
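And here is a sketch of the sine-transform route (a type-I DST over the interior points, so the zero boundary condition is built into the basis). The function name is my own, and scipy.fft.dstn/idstn are assumed to be available; b holds only the interior values here:
import numpy as np
from scipy.fft import dstn, idstn

def poisson_dirichlet_dst(b, dx, dy):
    # Solve lap(p) = b with p = 0 on the boundary. DST-I diagonalises the
    # Dirichlet discrete Laplacian, whose eigenvalues never vanish, so no
    # special handling of a zero mode is needed.
    nx, ny = b.shape
    mx = np.arange(1, nx + 1)
    my = np.arange(1, ny + 1)
    lam_x = (2.0 * np.cos(np.pi * mx / (nx + 1)) - 2.0) / dx**2
    lam_y = (2.0 * np.cos(np.pi * my / (ny + 1)) - 2.0) / dy**2
    bhat = dstn(b, type=1)
    phat = bhat / (lam_x[:, None] + lam_y[None, :])
    return idstn(phat, type=1)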
The solution in the frequency domain is a direct solution: you calculate each coefficient with a closed formula and then perform the inverse Fourier transform; no iteration is required. The accuracy tends to be good as well, as long as you compute the differences with enough accuracy.
If you are really worried about this, you should focus on differences like (1 - exp(2j*pi/N)): because the second term is close to 1, the number of significant bits is reduced. But you can improve the accuracy of such expressions by factoring them as exp(1j*pi/N) * (exp(-1j*pi/N) - exp(1j*pi/N)) = exp(1j*pi/N) * (-2j * sin(pi/N)), where you have a product and you don't lose any significant bits. All of this matters more if you are computing in single or half precision (you probably will not notice any rounding error using numpy.float64 or numpy.complex128).
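A quick numerical illustration of that factoring (both forms agree; the factored one just avoids subtracting two nearly equal numbers):
import numpy as np

N = 4096
direct = 1 - np.exp(2j * np.pi / N)                             # subtraction of nearly equal numbers
factored = np.exp(1j * np.pi / N) * (-2j * np.sin(np.pi / N))   # pure product, no cancellation
print(direct, factored, abs(direct - factored))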
If you calculate in the frequency domain and you are not happy with the accuracy you can always "refine" it with some iterations of your finite differences equation.
To do some simulations in Python, I'm trying to generate numbers a,b,c such that a^2 + b^2 + c^2 = 1. I think generating some a between 0 and 1, then some b between 0 and sqrt(1 - a^2), and then c = sqrt(1 - a^2 - b^2) would work.
Floating point values are fine, the sum of squares should be close to 1. I want to keep generating them for some iterations.
Being new to Python, I'm not really sure how to do this. Negatives are allowed.
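Something along these lines is what I had in mind (just a sketch of the idea above; I don't know whether this biases the result):
import math
import random

a = random.uniform(0, 1)
b = random.uniform(0, math.sqrt(1 - a**2))
c = math.sqrt(1 - a**2 - b**2)
print(a**2 + b**2 + c**2)  # should print approximately 1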
Edit: Thanks a lot for the answers!
According to this answer at stats.stackexchange.com, you should use normally distributed values to get uniformly distributed values on a sphere. That would mean you could do:
import numpy as np
abc = np.random.normal(size=3)
a,b,c = abc/np.sqrt(sum(abc**2))
Just in case you're interested in the probability densities, I decided to do a comparison between the different approaches:
import numpy as np
import random
import math

def MSeifert():
    a = 1
    b = 1
    while a**2 + b**2 > 1:  # discard any a and b whose sum of squares already exceeds 1
        a = random.random()
        b = random.random()
    c = math.sqrt(1 - a**2 - b**2)  # fixed c
    return a, b, c

def VBB():
    x = np.random.uniform(0, 1, 3)  # random numbers in [0, 1)
    x /= np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2)
    return x[0], x[1], x[2]

def user3684792():
    theta = random.uniform(0, 0.5*np.pi)
    phi = random.uniform(0, 0.5*np.pi)
    return np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)

def JohanL():
    abc = np.random.normal(size=3)
    a, b, c = abc / np.sqrt(sum(abc**2))
    return a, b, c

def SeverinPappadeux():
    cos_th = 2.0 * random.uniform(0, 1.0) - 1.0
    sin_th = math.sqrt(1.0 - cos_th * cos_th)
    phi = random.uniform(0, 2.0 * math.pi)
    return sin_th * math.cos(phi), sin_th * math.sin(phi), cos_th
And plotting the distributions:
%matplotlib notebook
import matplotlib.pyplot as plt

f, axes = plt.subplots(3, 5)
for func_idx, func in enumerate([MSeifert, JohanL, user3684792, VBB, SeverinPappadeux]):
    axes[0, func_idx].set_title(str(func.__name__))
    res = [func() for _ in range(50000)]
    for idx in range(3):
        axes[idx, func_idx].hist([i[idx] for i in res], bins='auto')
axes[0, 0].set_ylabel('a')
axes[1, 0].set_ylabel('b')
axes[2, 0].set_ylabel('c')
plt.tight_layout()
With the result:
Explanation: The rows show the distributions for a, b and c respectively while the columns show the histograms (distributions) of the different approaches.
The only approaches that give a uniformly random distribution in the range (-1, 1) are JohanL's and Severin Pappadeux's. All other approaches show some features like spikes or a functional behaviour in the range [0, 1). Note that these two solutions currently give values between -1 and 1, while all other approaches give values between 0 and 1.
I think it is actually a cool problem, and a nice way to do this is to just use spherical polar coordinates and generate the angles at random.
import random
import numpy as np

def random_pt():
    theta = random.uniform(0, 0.5*np.pi)
    phi = random.uniform(0, 0.5*np.pi)
    return np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)
You could do it like this:
import random
import math

def three_random_numbers_adding_to_one():
    a = 1
    b = 1
    while a**2 + b**2 > 1:  # discard any a and b whose sum of squares already exceeds 1
        a = random.random()
        b = random.random()
    c = math.sqrt(1 - a**2 - b**2)  # fixed c
    return a, b, c

a, b, c = three_random_numbers_adding_to_one()
print(a**2 + b**2 + c**2)
However floats have only limited precision so these won't add to exactly 1, just approximately.
You may need to check if the numbers generated with this function are "random enough". It could be that this setup biases the "randomness".
The "right" answer depends on whether you are looking for a uniform random distribution in space, or on the surface of a sphere, or something else. If you are looking for points on the surface of a sphere, you still have to worry about the cos(theta) factor which will cause points to appear "bunched up" near the poles of the sphere. Since exact nature is not clear from your question, here is a "totally random" distribution that should work:
x = np.random.uniform(0,1,3) # random numbers in [0, 1)
x /= np.sqrt(x[0] ** 2 + x[1] ** 2 + x[2] ** 2)
Another advantage here is that since we are using numpy arrays, you can quickly scale to large sets of points too, by using x = np.random.uniform(0, 1, (3, n)) for any n.
Time to add another solution, heh...
This time it is truly uniform on the unit sphere point picking - check http://mathworld.wolfram.com/SpherePointPicking.html for details
import math
import random

def random_pt():
    cos_th = 2.0 * random.uniform(0, 1.0) - 1.0
    sin_th = math.sqrt(1.0 - cos_th * cos_th)
    phi = random.uniform(0, 2.0 * math.pi)
    return sin_th * math.cos(phi), sin_th * math.sin(phi), cos_th

for k in range(0, 100):
    a, b, c = random_pt()
    print("{0} {1} {2} {3}".format(a, b, c, a*a + b*b + c*c))
I'm facing a problem when I use scipy.integrate.ode.
I want to use a spectral method (Fourier transform) to solve a PDE with dispersive and convection terms, such as
du/dt = A * d^3 u / dx^3 + C * du/dx
After a Fourier transform, this PDE becomes a set of ODEs in complex space (uk is a complex vector):
duk/dt = (A * coeff^3 + C * coeff) * uk
coeff = (2 * pi * i * k) / L
where k is the wavenumber (e.g. k = 0, 1, 2, 3, -4, -3, -2, -1), i^2 = -1, and L is the length of the domain.
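For concreteness, here is a minimal sketch of this setup (the values of A, C, L, the grid size and the initial condition are just placeholders; my real right-hand side is much more expensive to evaluate):
import numpy as np
from scipy.integrate import ode

Nx = 64                                   # number of grid points (placeholder)
L = 2.0 * np.pi                           # domain length (placeholder)
A, C = 1.0e-3, 1.0                        # dispersion / convection coefficients (placeholders)
k = np.fft.fftfreq(Nx, d=1.0 / Nx)        # integer wavenumbers 0, 1, ..., -2, -1
coeff = 2.0 * np.pi * 1j * k / L

def uODE(t, uk):
    # RHS of duk/dt = (A * coeff^3 + C * coeff) * uk
    return (A * coeff**3 + C * coeff) * uk

x = np.linspace(0.0, L, Nx, endpoint=False)
uk0 = np.fft.fft(np.exp(-(x - L / 2.0)**2))   # FFT of a Gaussian initial condition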
When I use r = ode(uODE).set_integrator('zvode', method='adams'), Python warns with:
c ZVODE-- At current T (=R1), MXSTEP (=I1) steps
taken on this call before reaching TOUT
In above message, I1 = 500
In above message, R1 = 0.2191432098050D+00
I feel it is because the time step I chose is too large; however, I cannot decrease the time step, as every step is time-consuming for my real problem. Is there any other way to resolve this?
Did you consider solving the ODEs symbolically? With Sympy you can type
import sympy as sy
sy.init_printing() # use IPython for better results
from sympy.abc import A, C, c, x, t # variables
u = sy.Function('u')(x, t)
eq = sy.Eq(u.diff(t), c*u)
sl1 = sy.pdsolve(eq, u)
print("The solution of:")
sy.pprint(eq)
print("was determined to be:")
sy.pprint(sl1)
print("")
print("Substituting the coefficient:")
k,L = sy.symbols("k L", real=True)
coeff = (2 * sy.pi * sy.I * k) / L
cc = (A * coeff**3 + C * coeff)
sl2 = sy.simplify(sl1.replace(c, cc))
sy.pprint(sl2)
gives the following output:
The solution of:
    d/dt u(x, t) = c*u(x, t)
was determined to be:
    u(x, t) = F(x)*exp(c*t)
Substituting the coefficient:
    u(x, t) = F(x)*exp(-2*I*pi*k*t*(4*pi**2*A*k**2 - C*L**2)/L**3)
Note that F(x) depends on your initial values of u(x,t=0), which you need to provide.
Use sl2.rhs.evalf() to substitute in numbers.
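For example, with made-up numbers for the parameters (F(x) of course stays symbolic until you supply the initial condition):
# Assumed example values for A, C, k, L and t:
sy.pprint(sl2.rhs.evalf(subs={A: 1.0e-3, C: 1.0, k: 2, L: 10.0, t: 0.5}))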
This is a follow up to How to set up and solve simultaneous equations in python, but I feel it deserves its own reputation points for any answer.
For a fixed integer n, I have a set of 2(n-1) simultaneous equations as follows.
M(p) = 1+((n-p-1)/n)*M(n-1) + (2/n)*N(p-1) + ((p-1)/n)*M(p-1)
N(p) = 1+((n-p-1)/n)*M(n-1) + (p/n)*N(p-1)
M(1) = 1+((n-2)/n)*M(n-1) + (2/n)*N(0)
N(0) = 1+((n-1)/n)*M(n-1)
M(p) is defined for 1 <= p <= n-1. N(p) is defined for 0 <= p <= n-2. Notice also that p is just a constant integer in every equation so the whole system is linear.
Some very nice answers were given for how to set up a system of equations in Python. However, the system is sparse and I would like to solve it for large n. How can I use scipy's sparse matrix representation and scipy.sparse.linalg (http://docs.scipy.org/doc/scipy/reference/sparse.linalg.html), for example, instead?
I wouldn't normally keep beating a dead horse, but it happens that my non-vectorized approach to solving your other question has some merit when things get big. Because I was actually filling the coefficient matrix one item at a time, it is very easy to translate into COO sparse matrix format, which can efficiently be transformed to CSC and solved. The following does it:
import numpy as np
import scipy.sparse
import scipy.sparse.linalg

def sps_solve(n):
    # Solution vector is [N[0], N[1], ..., N[n - 2], M[1], M[2], ..., M[n - 1]]
    n_pos = lambda p: p
    m_pos = lambda p: p + n - 2
    data = []
    row = []
    col = []
    # p = 0
    # n * N[0] + (1 - n) * M[n-1] = n
    row += [n_pos(0), n_pos(0)]
    col += [n_pos(0), m_pos(n - 1)]
    data += [n, 1 - n]
    for p in range(1, n - 1):
        # n * M[p] + (1 + p - n) * M[n - 1] - 2 * N[p - 1] +
        # (1 - p) * M[p - 1] = n
        row += [m_pos(p)] * (4 if p > 1 else 3)
        col += ([m_pos(p), m_pos(n - 1), n_pos(p - 1)] +
                ([m_pos(p - 1)] if p > 1 else []))
        data += [n, 1 + p - n, -2] + ([1 - p] if p > 1 else [])
        # n * N[p] + (1 + p - n) * M[n - 1] - p * N[p - 1] = n
        row += [n_pos(p)] * 3
        col += [n_pos(p), m_pos(n - 1), n_pos(p - 1)]
        data += [n, 1 + p - n, -p]
    if n > 2:
        # p = n - 1
        # n * M[n - 1] - 2 * N[n - 2] + (2 - n) * M[n - 2] = n
        row += [m_pos(n - 1)] * 3
        col += [m_pos(n - 1), n_pos(n - 2), m_pos(n - 2)]
        data += [n, -2, 2 - n]
    else:
        # p = 1
        # n * M[1] - 2 * N[0] = n
        row += [m_pos(n - 1)] * 2
        col += [m_pos(n - 1), n_pos(n - 2)]
        data += [n, -2]
    coeff_mat = scipy.sparse.coo_matrix((data, (row, col))).tocsc()
    return scipy.sparse.linalg.spsolve(coeff_mat,
                                       np.ones(2 * (n - 1)) * n)
It is of course much more verbose than building it from vectorized blocks, as TheodorosZelleke does, but an interesting thing happens when you time both approaches:
First, and this is (very) nice, time is scaling linearly in both solutions, as one would expect from using the sparse approach. But the solution I gave in this answer is always faster, more so for larger ns. Just for the fun of it, I also timed TheodorosZelleke's dense approach from the other question, which gives this nice graph showing the different scaling of both types of solutions, and how very early, somewhere around n = 75, the solution here should be your choice:
I don't know enough about scipy.sparse to really figure out why the two sparse approaches differ, although I strongly suspect the use of LIL-format sparse matrices. There may be some very marginal performance gain, although a lot of compactness in the code, from turning TheodorosZelleke's answer into COO format. But that is left as an exercise for the OP!
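If you want to reproduce the rough scaling yourself, something along these lines is enough (absolute timings are of course machine dependent):
import timeit

# Average a few runs of sps_solve for increasing n to see the scaling trend.
for n in (100, 300, 1000, 3000):
    t = timeit.timeit(lambda: sps_solve(n), number=5) / 5
    print(n, t)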
This is a solution using scipy.sparse. Unfortunately the problem is not stated here. So in order to comprehend this solution, future visitors have to first look up the problem under the link provided in the question.
Solution using scipy.sparse:
from scipy.sparse import spdiags, lil_matrix, vstack, hstack
from scipy.sparse.linalg import spsolve
import numpy as np

def solve(n):
    nrange = np.arange(n)
    diag = np.ones(n-1)
    # upper left block
    n_to_M = spdiags(-2. * diag, 0, n-1, n-1)
    # lower left block
    n_to_N = spdiags([n * diag, -nrange[-1:0:-1]], [0, 1], n-1, n-1)
    # upper right block
    m_to_M = lil_matrix(n_to_N)
    m_to_M[1:, 0] = -nrange[1:-1].reshape((n-2, 1))
    # lower right block
    m_to_N = lil_matrix((n-1, n-1))
    m_to_N[:, 0] = -nrange[1:].reshape((n-1, 1))
    # build A, combine all blocks
    coeff_mat = hstack(
        (vstack((n_to_M, n_to_N)),
         vstack((m_to_M, m_to_N))))
    # const vector, right side of eq.
    const = n * np.ones((2 * (n-1), 1))
    return spsolve(coeff_mat.tocsr(), const).reshape((-1, 1))
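A small usage example (the result is a column vector holding the 2*(n - 1) unknowns, in the order implied by the block construction above):
# e.g. solve the system for n = 10
print(solve(10))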
There's some code that I've looked at before here: http://jkwiens.com/heat-equation-using-finite-difference/. His function implements a finite difference method to solve the heat equation using the scipy sparse matrix package.