Finite Difference Solution to Heat Equation - python

I'm practicing finite difference implementation and I cannot figure out why my solution looks so strange. The code is taken from: http://people.bu.edu/andasari/courses/numericalpython/Week9Lecture15/PythonFiles/FTCS_DirichletBCs.py.
Note: I'm using this lecture example for the heat equation, not the reaction-diffusion equation!
I haven't learned the relevant mathematics so this could be why!
My code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
import math as mth
from mpl_toolkits.mplot3d import Axes3D
import pylab as plb
import scipy as sp
import scipy.sparse as sparse
import scipy.sparse.linalg
# First start with diffusion equation with initial condition u(x, 0) = 4x - 4x^2 and u(0, t) = u(L, t) = 0
# First discretise the domain [0, L] X [0, T]
# Then discretise the derivatives
# Generate algorithm:
# 1. Compute initial condition for all i
# 2. For all n:
# 2i. Compute u_i^{n + 1} for internal space points
# 2ii. Set boundary values for i = 0 and i = N_x
M = 40 # number of grid points for space interval
N = 70 # number of grid points for time interval
x0 = 0
xL = 1 # unit grid differences
dx = (xL - x0) / (M - 1) # space step
t0 = 0
tF = 0.2
dt = (tF - t0) / (N - 1)
D = 0.3 # thermal diffusivity
a = D * dt / dx**2
# Create grid
tspan = np.linspace(t0, tF, N)
xspan = np.linspace(x0, xL, M)
# Initial matrix solution
U = np.zeros((M, N))
# Initial condition
U[:, 0] = 4*xspan - 4*xspan**2
# Boundary conditions
U[0, :] = 0
U[-1, :] = 0
# Discretised derivative formula
for k in range(0, N-1):
    for i in range(1, M-1):
        U[i, k+1] = a * U[i-1, k] + (1 - 2 * a) * U[i, k] + a * U[i + 1, k]
X, T = np.meshgrid(tspan, xspan)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')  # fig.gca(projection='3d') no longer works in recent Matplotlib
surf = ax.plot_surface(X, T, U, cmap=cm.coolwarm,
                       linewidth=0, antialiased=False)
ax.set_xticks([0, 0.05, 0.1, 0.15, 0.2])
ax.set_xlabel('Space')
ax.set_ylabel('Time')
ax.set_zlabel('U')
plt.tight_layout()
plt.show()
Edit: Changed the thermal diffusivity value to the correct one.

The main problem is the time-step length. If you look at the difference scheme, the numerics become unstable for a > 0.5. For your setup this translates to roughly N > 190; I get a nice picture if I increase your N to such a value.
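For reference, a minimal sketch (reusing the grid parameters from your question) that computes the largest stable time step and the smallest N satisfying a <= 0.5:
import numpy as np

# Grid parameters from the question
M, D = 40, 0.3
x0, xL, t0, tF = 0.0, 1.0, 0.0, 0.2
dx = (xL - x0) / (M - 1)

# FTCS is stable for a = D*dt/dx**2 <= 0.5, i.e. dt <= 0.5*dx**2/D
dt_max = 0.5 * dx**2 / D
N_min = int(np.ceil((tF - t0) / dt_max)) + 1  # N points give N-1 time steps
print(dt_max, N_min)  # N_min is about 184, so N = 200 is comfortably stable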
However, I think the time and space axes are swapped somewhere (if you try to interpret the graph, the boundary conditions and the expected damping of the profile over time do not line up with the axes). I cannot figure out right now why.
Edit: Actually, you swap T and X when you do meshgrid. This should work:
N = 200
...
...
T, X = np.meshgrid(tspan, xspan)
...
surf = ax.plot_surface(T, X, U, cmap=cm.coolwarm,
                       linewidth=0, antialiased=False)
...
ax.set_xlabel('Time')
ax.set_ylabel('Space')


Solving for most likely intersection of multiple multivariate Gaussians

Is there a performant way to directly solve for the most likely intersection point (X, Y) of several multivariable Gaussians?
I've seen a few posts here that have asked how to solve for the intersection between two Gaussians - the concept is familiar to me. Right now it's not obvious to me aside from iterating and solving for two distributions at a time.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
mus = [np.array([[0.3], [0.7]]),
       np.array([[0.3], [0.2]]),
       np.array([[1.5], [0.6]])]
covs = [np.array([[0.85, 0.3], [0.3, 0.25]]),
        np.array([[0.7, -0.41], [-0.41, 0.25]]),
        np.array([[0.5, 0.15], [0.15, 0.15]])]
cmaps = ["Reds", "Blues", "Greens"]
for m, cov, c in zip(mus, covs, cmaps):
    cov_inv = np.linalg.inv(cov)
    cov_det = np.linalg.det(cov)
    x = np.linspace(-3, 3)
    y = np.linspace(-3, 3)
    X, Y = np.meshgrid(x, y)
    coe = 1.0 / ((2 * np.pi)**2 * cov_det)**0.5
    Z = coe * np.e ** (-0.5 * (cov_inv[0,0]*(X-m[0])**2 +
        (cov_inv[0,1] + cov_inv[1,0])*(X-m[0])*(Y-m[1]) +
        cov_inv[1,1]*(Y-m[1])**2))
    plt.contour(X, Y, Z, cmap=c)
You can do a LOT better than iterating between 2 solutions at a time. Realize that at every (x, y) point, you have a Z value for all 3 curves, and at the 3-way intersecting point, they are all equal (or within tolerance). And at other points, if you take the lowest Z of the curves, and move towards the center (mu_x, mu_y) of that curve, you are moving in an improving direction.
The below is an iterative algorithm that does that. There is certainly some meat on the bone in terms of possible enhancements. Notably, you could incorporate a "tolerance" for stopping conditions easily, or do some weighted average of the 2 lower z values instead of just the lowest to get the movement vector, or tinker with a larger step size.
Anyhow, this converges very rapidly for many different test starting points.
Code:
import numpy as np
import matplotlib.pyplot as plt
class Curve:
    # a convenience so we can avoid recomputations
    def __init__(self, mu, cov_inv, cov_det):
        self.mu = mu
        self.cov_inv = cov_inv
        self.cov_det = cov_det
        self.coe = 1.0 / ((2 * np.pi)**2 * cov_det)**0.5

    def z(self, x, y):
        Z = self.coe * np.e ** (-0.5 * (self.cov_inv[0,0]*(x-self.mu[0])**2 +
            (self.cov_inv[0,1] + self.cov_inv[1,0])*(x-self.mu[0])*(y-self.mu[1]) +
            self.cov_inv[1,1]*(y-self.mu[1])**2))
        return Z

mus = [np.array([[0.3], [0.7]]),
       np.array([[0.3], [0.2]]),
       np.array([[1.5], [0.6]])]
covs = [np.array([[0.85, 0.3], [0.3, 0.25]]),
        np.array([[0.7, -0.41], [-0.41, 0.25]]),
        np.array([[0.5, 0.15], [0.15, 0.15]])]
cmaps = ["Reds", "Blues", "Greens"]
curves = []
for m, cov, c in zip(mus, covs, cmaps):
    cov_inv = np.linalg.inv(cov)
    cov_det = np.linalg.det(cov)
    x = np.linspace(-3, 3)
    y = np.linspace(-3, 3)
    X, Y = np.meshgrid(x, y)
    coe = 1.0 / ((2 * np.pi)**2 * cov_det)**0.5
    Z = coe * np.e ** (-0.5 * (cov_inv[0,0]*(X-m[0])**2 +
        (cov_inv[0,1] + cov_inv[1,0])*(X-m[0])*(Y-m[1]) +
        cov_inv[1,1]*(Y-m[1])**2))
    plt.contour(X, Y, Z, cmap=c)
    curves.append(Curve(m, cov_inv, cov_det))
# iterative algorithm...
pos = np.array((-1, 2))
step_size = 0.1
num_steps = 100
footprints = [pos, ]
for step in range(num_steps):
    zs = [(curves[i].z(*pos), i) for i in range(len(curves))]
    zs.sort()             # sort by z value, lowest will be first
    c = curves[zs[0][1]]  # the curve to move toward
    vec = c.mu.T - pos
    move_vec = vec * (step_size / np.linalg.norm(vec))
    print(f'move: {move_vec} towards curve {zs[0][1]}')
    pos = pos + move_vec
    pos = pos.flatten()
    # check to see if we have backtracked; if so, shorten the step
    if len(footprints) > 1 and np.linalg.norm(pos - footprints[-2]) < step_size:
        # print(f'norm: {np.linalg.norm(pos - footprints[-2])}')
        step_size *= 0.5
    footprints.append(pos)
plt.plot([t[0] for t in footprints], [t[1] for t in footprints], c='k', lw=2)
plt.show()
Plot:
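As a quick numerical check of the "all equal within tolerance" idea (a sketch reusing curves and footprints from the code above), evaluate each Gaussian at the final position; at a genuine 3-way intersection the three z values should nearly agree:
# evaluate each curve at the converged position
final = footprints[-1]
z_vals = [crv.z(*final).item() for crv in curves]
print(z_vals, max(z_vals) - min(z_vals))  # the spread should be near zero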

Unit test between angle theta and random value g

I'm trying to create unit-test figures relating an angle theta to a random value g, where g takes values from -1 to +1.
Also, q is a random value which takes values from 0 to +1.
theta = (1 + g*g - ((1 - g*g)/(1 - g + 2*g*q))**2)/(2*g)
##########################
#unit tests for scattering angle and g factor
#++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
import numpy as np
from random import random, gauss
import matplotlib.pyplot as plt
g = np.random.uniform(-1, 1)
def hg_approximation(g=g):
    q = np.random.uniform(0, 1)
    g = np.random.uniform(-1, 1)
    theta = (1 + g*g - ((1 - g*g)/(1 - g + 2*g*q))**2)/(2*g)
    theta = gauss(0, theta)
    return theta, g

for i in range(5):
    fig = plt.figure(figsize=(10, 10))
    thetas = []
    for j in range(len(g)):
        g_ = g[j]
        theta = hg_approximation(g=g)
        thetas.append(2.*theta)
    plt.plot(g, theta, 'r.', label=r'$\Theta$')
    plt.legend(fontsize=10)
    plt.xlabel('The g factor')
    plt.ylabel(r'Scattering angle $\Theta$')
    plt.show()
Could you please help me get unit-test plots (5 figures) that show how theta (y-axis) changes with the random number g (x-axis)?
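A minimal sketch of one way to get such figures, assuming g is meant to be an array of samples (the code above draws a single scalar g but then calls len(g)), with a fresh q drawn for every g sample:
import numpy as np
import matplotlib.pyplot as plt

def hg_theta(g):
    # the question's formula, with one random q in (0, 1) per g sample
    q = np.random.uniform(0, 1, size=g.shape)
    return (1 + g*g - ((1 - g*g)/(1 - g + 2*g*q))**2) / (2*g)

for i in range(5):                        # five unit-test figures
    g = np.random.uniform(-1, 1, 500)
    g = g[np.abs(g) > 1e-3]               # avoid division by 2*g blowing up near g = 0
    theta = hg_theta(g)
    plt.figure(figsize=(10, 10))
    plt.plot(g, theta, 'r.', label=r'$\Theta$')
    plt.xlabel('The g factor')
    plt.ylabel(r'Scattering angle $\Theta$')
    plt.legend(fontsize=10)
    plt.show()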

How to guess the numerical Solution for Mathieu's Equation

I am trying to predict the exact solution for Mathieu's equation y'' + (lambda - 2q cos(2x))y = 0. I have been able to get five eigenvalues for the equation using numerical approximation, and I want to find a guessed exact solution for each eigenvalue. I would be grateful if someone could help. Thank you. Below is the code for the fourth eigenvalue.
from scipy.integrate import solve_bvp
import numpy as np
import matplotlib.pyplot as plt
# Definition of Mathieu's equation
q = 5.0

def func(x, u, p):
    lambd = p[0]
    # y'' + (lambda - 2*q*cos(2x))*y = 0
    ODE = [u[1], -(lambd - 2.0*q*np.cos(2.0*x))*u[0]]
    return np.array(ODE)

# Definition of boundary conditions (BC)
def bc(ua, ub, p):
    return np.array([ua[0]-1., ua[1], ub[1]])

# A guess solution of Mathieu's equation
def guess(x):
    return np.cos(4*x-6)

Nx = 100
x = np.linspace(0, np.pi, Nx)
u = np.zeros((2, x.size))
u[0] = -x
res = solve_bvp(func, bc, x, u, p=[16], tol=1e-7)
sol = guess(x)
print(res.p[0])
x_plot = np.linspace(0, np.pi, Nx)
u_plot = res.sol(x_plot)[0]
plt.plot(x_plot, u_plot, 'r-', label='u')
plt.plot(x, sol, color='black', label='Guess')
plt.legend()
plt.xlabel("x")
plt.ylabel("y")
plt.title(r"Mathieu's Equation for Guess$= \cos(3x) \quad \lambda_4 = %g$" % res.p)
plt.grid()
plt.show()
(Plot of the fourth eigenpair.)
The task is to compute the first five eigenpairs, that is, pairs of eigenvalues and eigenfunctions, of Mathieu's equation y'' + (λ − 2q cos(2x))y = 0 on the interval [0, π] with boundary conditions:
y'(0) = 0 and y'(π) = 0, with q = 5.
The solution is normalized so that y(0) = 1. Although all the initial values are known at x = 0, the problem requires finding a value of the parameter λ that allows the boundary condition y'(π) = 0 to be satisfied.
The guess for the exact solution of Mathieu's equation is therefore cos(kx), where k ∈ ℕ.
from scipy.integrate import solve_bvp
import numpy as np
import matplotlib.pyplot as plt
q = 5.0

# Definition of Mathieu's equation
def func(x, u, p):
    lambd = p[0]
    # y'' + (lambda - 2*q*cos(2x))*y = 0 can be rewritten as u2' = -(lambda - 2*q*cos(2x))*u1
    ODE = [u[1], -(lambd - 2.0*q*np.cos(2.0*x))*u[0]]
    return np.array(ODE)

# Definition of boundary conditions (BC)
def bc(ua, ub, p):
    return np.array([ua[0]-1., ua[1], ub[1]])

# A guess solution of Mathieu's equation
def guess(x):
    return np.cos(5*x)  # for k=5

Nx = 100
x = np.linspace(0, np.pi, Nx)
u = np.zeros((2, x.size))
u[0] = -x  # initial guess
res = solve_bvp(func, bc, x, u, p=[20], tol=1e-9)
sol = guess(x)
print(res.p[0])
x_plot = np.linspace(0, np.pi, Nx)
u_plot = res.sol(x_plot)[0]
plt.plot(x_plot, u_plot, 'r-', label='u')
plt.plot(x, sol, linestyle='--', color='k', label='Guess')
plt.legend(loc='best')
plt.xlabel("x")
plt.ylabel("y")
plt.title(r"Mathieu's Equation $\lambda_5 = %g$" % res.p)
plt.grid()
plt.savefig('Eigenpair_5v1.png')
plt.show()
Solution of Mathieu Equation
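To collect all five eigenpairs in one run, a sketch along the same lines (it reuses func, bc, and x from the code above; the starting guesses for lambda and the cos(k*x) initial profiles are assumptions, and solve_bvp may converge to the same eigenvalue from neighbouring guesses, so check res.p):
# sweep initial guesses to pick out the first five eigenpairs
eigenpairs = []
for k in range(5):
    u0 = np.zeros((2, x.size))
    u0[0] = np.cos(k * x)  # initial profile shaped like the expected cos(k*x)
    res = solve_bvp(func, bc, x, u0, p=[k**2], tol=1e-9)
    if res.success:
        eigenpairs.append((res.p[0], res.sol(x)[0]))
        print('k = %d, lambda = %g' % (k, res.p[0]))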

emcee walkers burn in but then remain the same

I'm having an issue using emcee. It's a simple enough 3-parameter fit, but occasionally (it has only occurred in two scenarios so far, despite much use) my walkers burn in just fine but then do not move (see figure below). The acceptance fraction reported is 0.
Has anyone else encountered this issue before? I have tried varying my initial conditions, number of walkers, iterations, etc. This piece of code has been running well on very similar data sets. It's not a challenging parameter space, and it seems unlikely that the walkers would be getting "stuck".
Any ideas? I'm stumped... my walkers are lazy it seems...
Sample code below (and sample data file). This code + data file fail when I run it.
import numpy as n
import math
import pylab as py
import matplotlib.pyplot as plt
import scipy
from scipy.optimize import curve_fit
from scipy import ndimage
import pyfits
from scipy import stats
import emcee
import corner
import scipy.optimize as op
import matplotlib.pyplot as pl
from matplotlib.ticker import MaxNLocator
def sersic(x, B, r_s, m):
    return B * n.exp(-1.0 * (1.9992*m - 0.3271) * ((x/r_s)**(1.0/m) - 1.))

def lnprior(theta):
    B, r_s, m, lnf = theta
    if 0.0 < B < 500.0 and 0.5 < m < 10. and r_s > 0. and -10.0 < lnf < 1.0:
        return 0.0
    return -n.inf

def lnlike(theta, x, y, yerr):  # "least squares"
    B, r_s, m, lnf = theta
    model = sersic(x, B, r_s, m)
    inv_sigma2 = 1.0/(yerr**2 + model**2*n.exp(2*lnf))
    return -0.5*(n.sum((y-model)**2*inv_sigma2 - n.log(inv_sigma2)))

def lnprob(theta, x, y, yerr):  # kills based on priors
    lp = lnprior(theta)
    if not n.isfinite(lp):
        return -n.inf
    return lp + lnlike(theta, x, y, yerr)

profile = open("testprofile.dat", 'r')  # read in the data file
profilelines = profile.readlines()
x = n.empty(len(profilelines))
y = n.empty(len(profilelines))
yerr = n.empty(len(profilelines))
for i, line in enumerate(profilelines):
    col = line.split()
    x[i] = col[0]
    y[i] = col[1]
    yerr[i] = col[2]
# Find the maximum likelihood value.
chi2 = lambda *args: -2 * lnlike(*args)
result = op.minimize(chi2, [50,2.0,0.5,0.5], args=(x, y, yerr))
B_ml, rs_ml,m_ml, lnf_ml = result["x"]
print("""Maximum likelihood result:
B = {0}
r_s = {1}
m = {2}
""".format(B_ml, rs_ml,m_ml))
# Set up the sampler.
ndim, nwalkers = 4, 4000
pos = [result["x"] + 1e-4*n.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(x, y, yerr))
# Clear and run the production chain.
print("Running MCMC...")
Niter = 2000 #2000
sampler.run_mcmc(pos, Niter, rstate0=n.random.get_state())
print("Done.")
# Print out the mean acceptance fraction.
af = sampler.acceptance_fraction
print "Mean acceptance fraction:", n.mean(af)
# Plot sampler chain
pl.clf()
fig, axes = pl.subplots(3, 1, sharex=True, figsize=(8, 9))
axes[0].plot(sampler.chain[:, :, 0].T, color="k", alpha=0.4)
axes[0].yaxis.set_major_locator(MaxNLocator(5))
axes[0].set_ylabel("$B$")
axes[1].plot(sampler.chain[:, :, 1].T, color="k", alpha=0.4)
axes[1].yaxis.set_major_locator(MaxNLocator(5))
axes[1].set_ylabel("$r_s$")
axes[2].plot(n.exp(sampler.chain[:, :, 2]).T, color="k", alpha=0.4)
axes[2].yaxis.set_major_locator(MaxNLocator(5))
axes[2].set_xlabel("step number")
fig.tight_layout(h_pad=0.0)
fig.savefig("line-time_test.png")
# plot MCMC fit
burnin = 100
samples = sampler.chain[:, burnin:, :3].reshape((-1, ndim-1))
B_mcmc, r_s_mcmc, m_mcmc = map(lambda v: (v[0]),
                               zip(*n.percentile(samples, [50], axis=0)))
print("""MCMC result:
B = {0}
r_s = {1}
m = {2}
""".format(B_mcmc, r_s_mcmc, m_mcmc))
pl.close()
# Make the triangle plot.
burnin = 50
samples = sampler.chain[:, burnin:, :3].reshape((-1, ndim-1))
fig = corner.corner(samples, labels=["$B$", "$r_s$", "$m$"])
fig.savefig("line-triangle_test.png")
Here's a better result. I made the random initial samples not so close to the maximum-likelihood value and ran the chain for a lot more steps with fewer walkers/chains. Notice that I'm plotting the m parameter itself, not its exponential as you did.
The mean acceptance fraction is ~0.48, and it took about 1 min to run on my laptop. You can of course add more steps and get an even better result.
import numpy as n
import emcee
import corner
import scipy.optimize as op
import matplotlib.pyplot as pl
from matplotlib.ticker import MaxNLocator
def sersic(x, B, r_s, m):
    return B * n.exp(
        -1.0 * (1.9992 * m - 0.3271) * ((x / r_s)**(1.0 / m) - 1.))

def lnprior(theta):
    B, r_s, m, lnf = theta
    if 0.0 < B < 500.0 and 0.5 < m < 10. and r_s > 0. and -10.0 < lnf < 1.0:
        return 0.0
    return -n.inf

def lnlike(theta, x, y, yerr):  # "least squares"
    B, r_s, m, lnf = theta
    model = sersic(x, B, r_s, m)
    inv_sigma2 = 1.0 / (yerr**2 + model**2 * n.exp(2 * lnf))
    return -0.5 * (n.sum((y - model)**2 * inv_sigma2 - n.log(inv_sigma2)))

def lnprob(theta, x, y, yerr):  # kills based on priors
    lp = lnprior(theta)
    if not n.isfinite(lp):
        return -n.inf
    return lp + lnlike(theta, x, y, yerr)

profile = open("testprofile.dat", 'r')  # read in the data file
profilelines = profile.readlines()
x = n.empty(len(profilelines))
y = n.empty(len(profilelines))
yerr = n.empty(len(profilelines))
for i, line in enumerate(profilelines):
    col = line.split()
    x[i] = col[0]
    y[i] = col[1]
    yerr[i] = col[2]
# Find the maximum likelihood value.
chi2 = lambda *args: -2 * lnlike(*args)
result = op.minimize(chi2, [50, 2.0, 0.5, 0.5], args=(x, y, yerr))
B_ml, rs_ml, m_ml, lnf_ml = result["x"]
print("""Maximum likelihood result:
B = {0}
r_s = {1}
m = {2}
lnf = {3}
""".format(B_ml, rs_ml, m_ml, lnf_ml))
# Set up the sampler.
ndim, nwalkers = 4, 10
pos = [result["x"] + 1e-1 * n.random.randn(ndim) for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(x, y, yerr))
# Clear and run the production chain.
print("Running MCMC...")
Niter = 50000
sampler.run_mcmc(pos, Niter, rstate0=n.random.get_state())
print("Done.")
# Print out the mean acceptance fraction.
af = sampler.acceptance_fraction
print("Mean acceptance fraction:", n.mean(af))
# Plot sampler chain
pl.clf()
fig, axes = pl.subplots(3, 1, sharex=True, figsize=(8, 9))
axes[0].plot(sampler.chain[:, :, 0].T, color="k", alpha=0.4)
axes[0].yaxis.set_major_locator(MaxNLocator(5))
axes[0].set_ylabel("$B$")
axes[1].plot(sampler.chain[:, :, 1].T, color="k", alpha=0.4)
axes[1].yaxis.set_major_locator(MaxNLocator(5))
axes[1].set_ylabel("$r_s$")
# axes[2].plot(n.exp(sampler.chain[:, :, 2]).T, color="k", alpha=0.4)
axes[2].plot(sampler.chain[:, :, 2].T, color="k", alpha=0.4)
axes[2].yaxis.set_major_locator(MaxNLocator(5))
axes[2].set_ylabel("$m$")
axes[2].set_xlabel("step number")
fig.tight_layout(h_pad=0.0)
fig.savefig("line-time_test.png")
# plot MCMC fit
burnin = 10000
samples = sampler.chain[:, burnin:, :3].reshape((-1, ndim - 1))
B_mcmc, r_s_mcmc, m_mcmc = map(
    lambda v: (v[0]), zip(*n.percentile(samples, [50], axis=0)))
print("""MCMC result:
B = {0}
r_s = {1}
m = {2}
""".format(B_mcmc, r_s_mcmc, m_mcmc))
pl.close()
# Make the triangle plot.
burnin = 50
samples = sampler.chain[:, burnin:, :3].reshape((-1, ndim - 1))
fig = corner.corner(samples, labels=["$B$", "$r_s$", "$m$"])
fig.savefig("line-triangle_test.png")

How do you plot a line with two slopes using python

I am using the code below to plot a line with two slopes, as shown in the picture. The slope should decline after a certain limit (limit = 5). I am using a vectorised method to set the slope values. Is there another method to set the slope values? Could anyone help me with this?
import matplotlib.pyplot as plt
import numpy as np
#Setting the condition
L=5 #Limit
m=1 #Slope
c=0 #Intercept
x=np.linspace(0,10,1000)
#Calculate the y value
y=m*x+c
#plot the line
plt.plot(x,y)
#Set the slope values using vectorisation
m[(x<L)] = 1.0
m[(x>L)] = 0.75
# plot the line again
plt.plot(x,y)
#Display with grids
plt.grid()
plt.show()
You may be overthinking the problem. There are two line segments in the picture:
From (0, 0) to (A, A')
From (A, A') to (B, B')
You know that A = 5, m = 1, so A' = 5. You also know that B = 10. Given that (B' - A') / (B - A) = 0.75, we have B' = 8.75. You can therefore make the plot as follows:
from matplotlib import pyplot as plt
m0 = 1
m1 = 0.75
x0 = 0 # Intercept
x1 = 5 # A
x2 = 10 # B
y0 = 0 # Intercept
y1 = y0 + m0 * (x1 - x0) # A'
y2 = y1 + m1 * (x2 - x1) # B'
plt.plot([x0, x1, x2], [y0, y1, y2])
Hopefully you see the pattern for computing y values for a given set of limits. Here is the result:
Now let's say you really did want to use vectorization for some obscure reason. You would want to compute all the y values up front and plot once, otherwise you will get weird results. Here are some modifications to your original code:
from matplotlib import pyplot as plt
import numpy as np
#Setting the condition
L = 5 #Limit
x = np.linspace(0, 10, 1000)
lMask = (x<=L) # Avoid recomputing this mask
# Compute a vector of slope values for each x
m = np.zeros_like(x)
m[lMask] = 1.0
m[~lMask] = 0.75
# Compute the y-intercept for each segment
b = np.zeros_like(x)
#b[lMask] = 0.0 # Already set to zero, so skip this step
b[~lMask] = L * (m[0] - 0.75)
# Compute the y-vector
y = m * x + b
# plot the line again
plt.plot(x, y)
#Display with grids
plt.grid()
plt.show()
Following your code, you should modify the main part like this:
x=np.linspace(0,10,1000)
m = np.empty(x.shape)
c = np.empty(x.shape)
m[(x<L)] = 1.0
c[x<L] = 0
m[(x>L)] = 0.75
c[x>L] = L*(1.0 - 0.75)
y=m*x+c
plt.plot(x,y)
Note that c needs to change as well for the line to be continuous. This is the result:
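If you ever want the same thing with np.piecewise instead, an equivalent sketch (same limit and slopes as above) that keeps the line continuous by starting the second segment from the first segment's endpoint:
import matplotlib.pyplot as plt
import numpy as np

L, m0, m1 = 5, 1.0, 0.75
x = np.linspace(0, 10, 1000)
# one function per condition; the second one resumes from the point (L, m0*L)
y = np.piecewise(x, [x <= L, x > L],
                 [lambda t: m0 * t,
                  lambda t: m0 * L + m1 * (t - L)])
plt.plot(x, y)
plt.grid()
plt.show()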
