Smoothing signal - convolution - python

I have an iterative model in Python which generates a signal using a function that contains a derivative. As the model iterates, the signal becomes noisy - I suspect it may be an issue with computing the numerical derivative. I've tried to smooth this by applying a low-pass filter, convolving the noisy signal with a Gaussian kernel. I use the code snippet:
nw = 256
std = 40
window = gaussian(nw, std, sym=True)
filtered = convolve(current, window, mode='same') / np.sum(window)
where current is my signal, and gaussian and convolve are imported from scipy.signal. This seems to give a slight improvement, and the first 2 or 3 iterations appear very smooth. However, after that the signal becomes extremely noisy again, even though the low-pass filter is applied inside the iterative loop.
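For reference, here is a minimal, self-contained version of that smoothing step on a synthetic noisy signal (only a sketch; it assumes a recent SciPy where the window lives in scipy.signal.windows, whereas older releases expose it as scipy.signal.gaussian, as in my snippet):
import numpy as np
from scipy.signal import convolve
from scipy.signal.windows import gaussian

rng = np.random.default_rng(0)
current = np.sin(np.linspace(0, 4 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)  # synthetic noisy signal

nw = 256
std = 40
window = gaussian(nw, std, sym=True)                                # Gaussian kernel
filtered = convolve(current, window, mode='same') / np.sum(window)  # normalised low-pass output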
Can anyone suggest where I might be going wrong or how I could better tackle this problem? Thanks.
EDIT: As suggested I've included the code I'm using below. At 5 iterations the noise on the signal is clearly apparent.
import numpy as np
from scipy import special
import matplotlib.pyplot as plt
from scipy.integrate import odeint
from scipy.signal import convolve
from scipy.signal import gaussian
# Constants
B = 426400E-9 # tesla
R = 71723E3
Rkm = R / 1000.
Omega = 1.75e-4 #8.913E-4 # rads/s
period = (2. * np.pi / Omega) / 3600. # Gets period in hours
Bj = 2.0 * B
mdot = 1000.
sigmapstar = 0.05
# Create rhoe array
rho0 = 5.* R
rho1 = 100. * R
rhoe = np.linspace(rho0, rho1, int(2e5))
# Define flux function and z component of equatorial field strength
Fe = B * R**3 / rhoe
Bze = B * (R/rhoe)**3
def derivs(u, rhoe, p):
    """Computes the derivative"""
    wOmegaJ = u
    Bj, sigmapstar, mdot, B, R = p
    # Compute the derivative of w/omegaJ wrt rhoe (Fe and Bjz have been subbed in)
    dwOmegaJ = (((8.0 * np.pi * sigmapstar * B**2 * R**6) / (mdot * rhoe**5))
                * (1.0 - wOmegaJ) - (2. * wOmegaJ / rhoe))
    return dwOmegaJ
its = 5 # number of iterations to perform
i = 0
# Loop to iterate
while i < its:
    # Define the initial condition of rigid corotation
    wOmegaJ_0 = 1
    params = [Bj, sigmapstar, mdot, B, R]
    init = wOmegaJ_0
    # Compute numerical solution to Hill eqn
    u = odeint(derivs, init, rhoe, args=(params,))
    wOmega = u[:, 0]
    # Calculate I_rho
    i_rho = 8. * np.pi * sigmapstar * Omega * Fe * (1. - wOmega)
    dx = rhoe[1] - rhoe[0]
    differential = np.gradient(i_rho, dx)
    jpara = 1. * differential / (4 * np.pi * rhoe * Bze)
    jpari = 2. * B * jpara
    # Remove infinity and NaN values
    jpari[~np.isfinite(jpari)] = 0.0
    # Convolve to smooth curve
    nw = 256
    std = 40
    window = gaussian(nw, std, sym=True)
    filtered = convolve(jpari, window, mode='same') / np.sum(window)
    jpari = filtered
    # Pedersen conductivity as a function of jpari
    sigmapstar0 = 0.05
    jstar = 0.01e-6
    jstarstar = 0.25e-6
    s1 = 0.1e6  # (Am^-2)^-1
    s2 = 9.9e6  # (Am^-2)^-1
    n = 8.
    # Calculate new sigmapstar (realistic conductivity)
    sigmapstarNew = sigmapstar0 + 0.5 * (s1 + s2 / (1 + (jpari / jstarstar)**n)**(1. / n)) \
        * (np.sqrt(jpari**2 + jstar**2) + jpari)
    diff = np.abs(sigmapstar - sigmapstarNew) / sigmapstar * 100
    diff = max(diff)
    sigmapstar = 0.5 * sigmapstar + 0.5 * sigmapstarNew  # Weighted averaging
    i += 1
    print(diff)
# Plot jpari
ax = plt.subplot(111)
ax.plot(rhoe/R, jpari * 1e6)
ax.axhline(0, ls=':')
ax.set_xlabel(r'$\rho_e / R_{UCD}$')
ax.set_ylabel(r'$j_{\parallel i} $ / $ \mu$ A m$^{-2}$')
ax.set_xlim([0,80])
ax.set_ylim(-0.01,0.01)
plt.locator_params(nbins=5)
plt.draw()
plt.show()

Related

Error in implementation of Crank-Nicolson method applied to 1D TDSE?

This is more of a computational physics problem. I've asked it on Physics Stack Exchange, but got no answers there. It is, I suppose, a mix of the disciplines here and there (and maybe even Mathematics Stack Exchange), so finding the right place to post is a task in and of itself, apparently...
I'm attempting to use Crank-Nicolson scheme to solve the TDSE in 1D. The initial wave is a real Gaussian that has been normalised wrt its probability density. As the solution evolves, a depression grows in the central peak of the real part of the wave, and the imaginary part's central trough is perhaps a bit higher than I expect (image below).
Does this behaviour seem reasonable? I have searched around and not seen similar questions/figures. I've tested another person's code from GitHub and it exhibits the same behaviour, which makes me feel a bit better. But I still think the central peak should just decrease in height and increase in width. The likelihood of my getting a physics-based explanation here is relatively low, I'd assume, but a computation-based explanation of errors I may have made is more likely.
I'm happy to give more information, for example my code, or the matrices used in the scheme, etc. Thanks in advance!
Here's a link to GIF of time evolution:
And the part of my code relevant to solving the 1D TDSE:
(pretty much the entire thing except the plotting)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
# Define function for norm.
def normf(dxc, uc, ic):
    return sum(dxc * np.square(np.abs(uc[ic, :])))
# Define function for expectation value of position.
def xexpf(dxc, xc, uc, ic):
    return sum(dxc * xc * np.square(np.abs(uc[ic, :])))
# Define function for expectation value of squared position.
def xexpsf(dxc, xc, uc, ic):
    return sum(dxc * np.square(xc) * np.square(np.abs(uc[ic, :])))
# Define function for standard deviation.
def sdaf(xexpc, xexpsc, ic):
    return np.sqrt(xexpsc[ic] - np.square(xexpc[ic]))
# Time t: t0 =< t =< tf. Have N steps at which to evaluate the CN scheme. The
# time interval is dt. decp: variable for plotting to certain number of decimal
# places.
t0 = 0
tf = 20
N = 200
dt = tf / N
t = np.linspace(t0, tf, num = N + 1, endpoint = True)
decp = str(dt)[::-1].find('.')
# Initialise array for filling with norm values at each time step.
norm = np.zeros(len(t))
# Initialise array for expectation value of position.
xexp = np.zeros(len(t))
# Initialise array for expectation value of squared position.
xexps = np.zeros(len(t))
# Initialise array for alternate standard deviation.
sda = np.zeros(len(t))
# Position x: -a =< x =< a. M is an even number. There are M + 1 total discrete
# positions, for the points to be symmetric and centred at x = 0.
a = 100
M = 1200
dx = (2 * a) / M
x = np.linspace(-a, a, num = M + 1, endpoint = True)
# The gaussian function u diffuses over time. sd sets the width of gaussian. u0
# is the initial gaussian at t0.
sd = 1
var = np.power(sd, 2)
mu = 0
u0 = np.sqrt(1 / np.sqrt(np.pi * var)) * np.exp(-np.power(x - mu, 2) / (2 * \
var))
u = np.zeros([len(t), len(x)], dtype = 'complex_')
u[0, :] = u0
# Normalise u.
u[0, :] = u[0, :] / np.sqrt(normf(dx, u, 0))
# Set coefficients of CN scheme.
alpha = dt * -1j / (4 * np.power(dx, 2))
beta = dt * 1j / (4 * np.power(dx, 2))
# Tridiagonal matrices Al and AR. Al to be solved using Thomas algorithm.
Al = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Al[i + 1, i] = alpha
    Al[i, i] = 1 - (2 * alpha)
    Al[i, i + 1] = alpha
# Corner elements for BC's.
Al[M, M], Al[0, 0] = 1 - alpha, 1 - alpha
Ar = np.zeros([len(x), len(x)], dtype = 'complex_')
for i in range(0, M):
    Ar[i + 1, i] = beta
    Ar[i, i] = 1 - (2 * beta)
    Ar[i, i + 1] = beta
# Corner elements for BC's.
Ar[M, M], Ar[0, 0] = 1 - 2*beta, 1 - beta
# Thomas algorithm variables. Following similar naming as in Wiki article.
a = np.diag(Al, -1)
b = np.diag(Al)
c = np.diag(Al, 1)
NT = len(b)
cp = np.zeros(NT - 1, dtype = 'complex_')
for n in range(0, NT - 1):
    if n == 0:
        cp[n] = c[n] / b[n]
    else:
        cp[n] = c[n] / (b[n] - (a[n - 1] * cp[n - 1]))
d = np.zeros(NT, dtype = 'complex_')
dp = np.zeros(NT, dtype = 'complex_')
# Iterate over each time step to solve CN method. Maintain boundary
# conditions. Keep track of standard deviation.
for i in range(0, N):
    # BC's.
    u[i, 0], u[i, M] = 0, 0
    # Find RHS.
    d = np.dot(Ar, u[i, :])
    for n in range(0, NT):
        if n == 0:
            dp[n] = d[n] / b[n]
        else:
            dp[n] = (d[n] - (a[n - 1] * dp[n - 1])) / (b[n] - (a[n - 1] * cp[n - 1]))
    nc = NT - 1
    while nc > -1:
        if nc == NT - 1:
            u[i + 1, nc] = dp[nc]
            nc -= 1
        else:
            u[i + 1, nc] = dp[nc] - (cp[nc] * u[i + 1, nc + 1])
            nc -= 1
    norm[i] = normf(dx, u, i)
    xexp[i] = xexpf(dx, x, u, i)
    xexps[i] = xexpsf(dx, x, u, i)
    sda[i] = sdaf(xexp, xexps, i)
# Fill in final norm value.
norm[N] = normf(dx, u, N)
# Fill in final position expectation value.
xexp[N] = xexpf(dx, x, u, N)
# Fill in final squared position expectation value.
xexps[N] = xexpsf(dx, x, u, N)
# Fill in final standard deviation value.
sda[N] = sdaf(xexp, xexps, N)
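As a rough sanity check (this assumes the norm array filled above), Crank-Nicolson should keep the total probability essentially constant, so the deviation printed below should stay small:
print(np.max(np.abs(norm - norm[0])))  # deviation of the norm from its initial value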

TypeError: unsupported operand type(s) for *: 'QuadMesh' and 'float' in Python

I have the following code. I know it's long and complex, but it takes about 1.5 minutes on my laptop to run. I would greatly appreciate any help finding the problem causing the error at the end, in the plotting part. I didn't find anything on Google related to this error message:
TypeError: unsupported operand type(s) for *: 'QuadMesh' and 'float'
from scipy import interpolate
from scipy.fft import fft, ifft
from scipy.constants import c, epsilon_0
import numpy as np, math
from matplotlib import pyplot as plt
lambda_0 = 800 * 10**(-9)
omega_0 = 2*np.pi*c / lambda_0
delta_lambda = 50.0 * 10**(-9)
delta_Tau = (1.47 * 10**(-3) * (lambda_0*10**9)**2 / (delta_lambda*10**9)) * 10**(-15) #
delta_omega_PFT = (4*np.log(2) / delta_Tau) # equivalent (equal) to: ((4*ln(2)) / (2*pi)) * (2*pi/delta_Tau) = 0.441 * (2*pi/delta_Tau)
F = 2.0 # focal length in meters, as from Liu 2017
def G(omegas):
    # return ( np.sqrt(np.pi/(2*np.log(2))) * tau * np.exp( -(tau**2/(8*np.log(2))) * (omegas-omega_0)**2 ) ) # why is this here?
    return np.exp( -(delta_Tau**2/(8*np.log(2))) * (omegas-omega_0)**2 )
xsi = 0.1 * (1.0 * 10**(-15)) / (1.0 * 10**(-3))
def phase_c(xs, omegas):
    return ( xsi * np.reshape(xs, (xs.shape[0], 1)) * np.reshape((omegas-omega_0), (1, omegas.shape[0])) )
E0 = np.sqrt( (0.2*10**4 * 8 * np.sqrt(np.log(2))) / (delta_Tau*np.sqrt(np.pi)*c*epsilon_0/2.0) ) * np.sqrt(2*np.pi*np.log(2)) / delta_omega_PFT
def f(xi, omega): # the prefactors from Eq. (5) of Li et al. (2017) (the ones pre-multiplying the Fraunhoffer integral)
    first = omega * np.exp(1j * (omega/c) * F) / (1j * 2*np.pi*c*F) # only function of omega. first is shape (omega.shape[0], )
    omega = np.reshape(omega, (1, omega.shape[0]))
    xi = np.reshape(xi, (xi.shape[0], 1))
    second = np.exp(1j * (omega/c) * xi**2 / (2*F)) # second is shape (xi.shape[0], omega.shape[0]).
    return (first * second) # returned is shape (xi.shape[0], omega.shape[0])
x0 = 0.0
delta_x = 196.0 # obtained from N=10, N_x=8*10^3, xi_max=10um, F=2m
xmin_PFT = x0 - delta_x #
xmax_PFT = x0 + delta_x #
num_xs_PFT = 8 * 10**3
xs_PFT = np.linspace(xmin_PFT, xmax_PFT, num_xs_PFT)
sampling_spacing_xs_PFT = np.true_divide( (xmax_PFT-xmin_PFT), num_xs_PFT)
num_omegas_focus = 5 * 10**2
maximum_time = 100.0 * 10**(-15)
N = math.ceil( (np.pi*num_omegas_focus)/(2*delta_omega_PFT*maximum_time) ) - 1
omega_max_focus = omega_0 + N*delta_omega_PFT
omega_min_focus = omega_0 - N*delta_omega_PFT
omegas_focus = np.linspace(omega_min_focus, omega_max_focus, num_omegas_focus) # shape (num_omegas_focus, )
sampling_spacing_omegas_focus = np.true_divide((omega_max_focus-omega_min_focus) , num_omegas_focus)
Es_x_omega = np.multiply( (E0 * G(omegas_focus)) ,
(np.exp(1j*phase_c(xs_PFT, omegas_focus))) # phase_c uses xsi, the PFT coefficient
)
# Es_x_omega holds across columns (vertically downwards) the x-dependence and across rows (horizontally) the omega-dependence
# Es_x_omega is shape (num_xs_PFT, num_omegas_focus)
Bprime_data_real = np.empty((Es_x_omega.shape[0], Es_x_omega.shape[1])) # this can be rewritten in a more Pythonic way
Bprime_data_imag = np.empty((Es_x_omega.shape[0], Es_x_omega.shape[1]))
for i in range(Es_x_omega.shape[1]): # for all the columns (all omegas)
    # Perform FFT wrt x (so go from x to Kappa (a scaled spatial frequency))
    intermediate = fft(Es_x_omega[:, i])
    Bprime_data_real[:, i] = np.real(intermediate) * sampling_spacing_xs_PFT # multiplication by \Delta, see my docu above
    Bprime_data_imag[:, i] = np.imag(intermediate) * sampling_spacing_xs_PFT # multiplication by \Delta, see my docu above
    if i % 10000 == 0:
        print("We have done fft number {}".format(i) + " out of {}".format(Es_x_omega.shape[1]) + " ffts")
# Bprime is function of (Kappa, omega): across rows the omega dependence, across columns the Kappa dependence.
# Get the Kappas:
returned_freqs = np.fft.fftfreq(num_xs_PFT, sampling_spacing_xs_PFT) # shape (num_xs_PFT, )
Kappas_ugly = 2*np.pi * returned_freqs # shape (num_xs_PFT, ), but unordered in terms of the magnitude of the values! see https://numpy.org/doc/stable/reference/generated/numpy.fft.fftfreq.html
Kappas_pretty = 2*np.pi * np.fft.fftshift(returned_freqs)
indices = (Kappas_ugly == Kappas_pretty[:, None]).argmax(1) # shape (num_xs_PFT, )
indices = indices.reshape((indices.shape[0], 1)) # needed for adapting Dani Mesejo's answer: he reordered based on 1D slices laid horizontally, here I reorder based on 1D slices laid vertically.
# see my notebook for visuals, 22-23 Nov 2021
hold_real = Bprime_data_real.shape[1]
hold_imag = Bprime_data_imag.shape[1]
Bprime_data_real_pretty = np.take_along_axis(Bprime_data_real, np.tile(indices, (1, hold_real)), axis=0) # adapted from Dani Mesejo's answer
Bprime_data_imag_pretty = np.take_along_axis(Bprime_data_imag, np.tile(indices, (1, hold_imag)), axis=0) # adapted from Dani Mesejo's answer
print(Bprime_data_real_pretty.shape) # shape (num_xs_PFT, num_omegas_focus), similarly for Bprime_data_imag_pretty
Bprime_real = interpolate.RectBivariateSpline(Kappas_pretty, omegas_focus, Bprime_data_real_pretty) # this CTOR creates an object faster (which can also be queried faster)
Bprime_imag = interpolate.RectBivariateSpline(Kappas_pretty, omegas_focus, Bprime_data_imag_pretty) # than interpolate.interp2d() does.
print("We have the interpolators!")
# Prepare for the aim: plot E versus time (horizontal axis) and xi (vertical axis).
xi_min = -5.0 * 10**(-6) # um
xi_max = 5.0 * 10**(-6) # um
num_xis = 5000
xis = np.linspace(xi_min, xi_max, num_xis)
print("We are preparing now!")
Es_Kappa_omega_without_prefactor = np.empty((xis.shape[0], omegas_focus.shape[0]), dtype=complex)
for j in range(Es_Kappa_omega_without_prefactor.shape[0]): # for each row
    for i in range(Es_Kappa_omega_without_prefactor.shape[1]): # for each column
        Es_Kappa_omega_without_prefactor[j, i] = Bprime_real(omegas_focus[i]*xis[j] /(c*F), omegas_focus[i]) + 1j*Bprime_imag(omegas_focus[i]*xis[j] /(c*F), omegas_focus[i])
        if ((i + j*Es_Kappa_omega_without_prefactor.shape[1]) % 30000 == 0):
            print("We have done iter number {}".format(i + j*Es_Kappa_omega_without_prefactor.shape[1])
                  + " out of {}".format(Es_Kappa_omega_without_prefactor.shape[0] * Es_Kappa_omega_without_prefactor.shape[1]) + " iterations in querying the interpolators")
Es_Kappa_omega = np.multiply( f(xis, omegas_focus), # f(xis, omegas_focus) is shape (xis.shape[0], omegas_focus.shape[0])
Es_Kappa_omega_without_prefactor # Es_Kappa_omega_without_prefactor is shape (xis.shape[0], omegas_focus.shape[0])
) # the obtained variable is shape (xis.shape[0], omegas_focus.shape[0])
# Do IFT of Es_Kappa_omega w.r.t. omega to go from FD (omega) to TD (time t).
Es_Kappa_time = np.empty_like(Es_Kappa_omega, dtype=complex) # shape (xis.shape[0], omegas_focus.shape[0])
# Along columns (so vertically) the xi dependence, along rows (horizontally), the omega dependence
for i in range(Es_Kappa_omega.shape[0]): # for each row (for each xi)
    Es_Kappa_time[i, :] = ifft(Es_Kappa_omega[i, :]) * (sampling_spacing_omegas_focus/(2*np.pi)) * num_omegas_focus # 1st multiplication is by Delta, 2nd multiplication is by N
    if i % 10000 == 0:
        print("We have done ifft number {}".format(i) + " out of a total of {}".format(Es_Kappa_omega.shape[0]) + " iffts")
returned_times_ugly = np.fft.fftfreq(num_omegas_focus, d=(sampling_spacing_omegas_focus/(2*np.pi))) # shape (num_omegas_focus, )
returned_times_pretty = np.fft.fftshift(returned_times_ugly) # order the returned "frequencies" (here "frequencies" = times because it's IFT (so from FD to TD))
indices = (returned_times_ugly == returned_times_pretty[:, None]).argmax(1)
Es_Kappa_time = np.take_along_axis(Es_Kappa_time, np.tile(indices, (Es_Kappa_time.shape[0], 1)), axis=1) # this is purely Dani Mesejo's answer
returned_times_pretty_mesh, xis_mesh = np.meshgrid(returned_times_pretty, xis)
fig, ax = plt.subplots()
c = ax.pcolormesh(returned_times_pretty_mesh, xis_mesh, np.real(Es_Kappa_time), cmap='viridis')
fig.colorbar(c, ax=ax, label=r'$[V/m]$')
ax.set_xlabel("t [s]")
ax.set_ylabel("xi [m]")
plt.show()
fig, ax = plt.subplots()
ax.imshow(np.multiply(np.square(np.real(Es_Kappa_time)), (c*epsilon_0)), cmap='viridis')
ax.set_xlabel("t [s]")
ax.set_ylabel("xi [m]")
plt.show()
I have tried many variations of the plotting part. For example, it fails with:
c = ax.pcolormesh(returned_times_pretty_mesh, xis_mesh, (c*epsilon_0)*np.real(Es_Kappa_time)**2, cmap='viridis')
fig.colorbar(c, ax=ax, label=r'$[V/m]$')
Fails with:
160 fig, ax = plt.subplots()
--> 161 ax.imshow(np.multiply(np.real(Es_Kappa_time)**2, (c*epsilon_0)), cmap='viridis')
I ran out of ideas of what I might introduce there.
Thank you!
It looks like you're reassigning c to the QuadMesh returned by ax.pcolormesh(...) after importing c (the speed of light) from scipy.constants, so the later c*epsilon_0 tries to multiply a QuadMesh by a float, which is exactly what the TypeError reports.
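A minimal sketch of the fix, reusing the variables from the code above and simply giving the QuadMesh its own name (mesh is just an illustrative choice):
fig, ax = plt.subplots()
mesh = ax.pcolormesh(returned_times_pretty_mesh, xis_mesh,
                     (c * epsilon_0) * np.real(Es_Kappa_time)**2, cmap='viridis')
fig.colorbar(mesh, ax=ax, label=r'$[V/m]$')
ax.set_xlabel("t [s]")
ax.set_ylabel("xi [m]")
plt.show()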

How to correctly solve 1D wave equation to get displacement profile (periodic Boundary Condition problem)?

I'm trying to solve a 1D wave equation for the pile with periodic BC (periodic load).
I'm pretty sure about my discretization formulas. The only thing I'm not sure about is the periodic BC and time (t) in there ==> sin(omega*t).
When I set it up as it is right now, it gives me a weird displacement profile. However, if I set it up as sin(omega*1) or sin(omega*2), etc., it resembles a sine wave, but that basically means sin(omega*t), i.e. sin(2*pi*f*t), is equal to 0 whenever t is an integer value.
I coded everything up in a Jupyter Notebook, together with the visualization, but the solution is nowhere near a propagating sine wave.
Here is the relevant Python code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
def oned_wave(L, dz, T, p0, Ep, ro, f):
    """Solve u_tt = c^2 * u_xx on (0,L) x (0,T]. Assume C = 1."""
    p = p0
    E = Ep
    ro = ro
    omega = 2 * np.pi * f
    c = np.sqrt(E / ro)
    C = 1  # Courant number
    Nz = int(round(L / dz))
    z = np.linspace(0, L, Nz + 1)  # Mesh points in space
    dt = dz / c  # Time step based on Courant number
    Nt = int(round(T / dt))
    t = np.linspace(0, Nt * dt, Nt + 1)  # Mesh points in time
    C2 = C**2  # Help variable in the scheme
    # Make sure dz and dt are compatible with z and t
    dz = z[1] - z[0]
    dt = t[1] - t[0]
    w = np.zeros(Nz + 1)      # Solution array at new time level
    w_n = np.zeros(Nz + 1)    # Solution at 1 time level back
    w_nm1 = np.zeros(Nz + 1)  # Solution at 2 time levels back
    # Set initial condition into w_n
    for i in range(0, Nz + 1):
        w_n[i] = 0
    result_matrix = w_n[:]  # Solution matrix where each row is displacement at given time step
    # Special formula for first time step
    for i in range(1, Nz):
        w[i] = 0.5*C2 * w_n[i-1] + (1 - C2) * w_n[i] + 0.5*C2 * w_n[i+1]
    # Set BC
    w[0] = (1 - C2) * w_n[i] + C2 * w_n[i+1] - C2*dz*((p*np.sin(omega*dt))/E) # this is where, I think, the mistake is: sin(omega*t)
    w[Nz] = 0
    result_matrix = np.vstack((result_matrix, w))  # Append a row to the solution matrix
    w_nm1[:] = w_n; w_n[:] = w  # Switch variables before next step
    for n in range(1, Nt):
        # Update all inner points at time t[n+1]
        for i in range(1, Nz):
            w[i] = - w_nm1[i] + C2 * w_n[i-1] + 2*(1 - C2) * w_n[i] + C2 * w_n[i+1]
        # Set BC
        w[0] = - w_nm1[i] + 2*(1 - C2) * w_n[i] + 2*C2 * w_n[i+1] - 2*dz*((p*np.sin(omega*(dt*n)))/E) # this is where, I think, the mistake is: sin(omega*t)
        w[Nz] = 0
        result_matrix = np.vstack((result_matrix, w))  # Append a row to the solution matrix
        w_nm1[:] = w_n; w_n[:] = w  # Switch variables before next step
    return result_matrix
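For context, a hypothetical call of the function above (every parameter value below is a placeholder chosen only for illustration, not taken from my notebook):
result = oned_wave(L=10.0, dz=0.05, T=0.01, p0=1e3, Ep=30e9, ro=2400.0, f=10.0)
print(result.shape)  # (Nt + 1, Nz + 1): one row of displacements per time level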
My Jupyter Notebook document.

BER-SNR curve in Rayleigh channel using BPSK

I am trying to generate a BER-SNR curve in a Rayleigh channel using BPSK modulation. Can anyone help me find what is wrong with the Python code? The theoretical BER differs from the simulated BER. The red dotted line is the theoretical curve and the blue one is the simulated curve.
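For reference, the theoretical average BER for BPSK over a flat Rayleigh fading channel, which is what the code below computes as theoreticalBER, is $P_b = \frac{1}{2}\left(1 - \sqrt{\bar{\gamma}_b / (1 + \bar{\gamma}_b)}\right)$, where $\bar{\gamma}_b$ is the average Eb/N0 in linear units.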
import math as mt
import numpy as np
import matplotlib.pyplot as plt
import numpy.random as nr
from scipy.linalg import toeplitz
# ------------Simulation parameters------------------------
N = 500 #No.of symbols
H_n = 10 #No.of channel elements
h_ray = np.zeros((H_n, 1), dtype = complex)
# ---------------------------------------------------------
EbN0dB = np.array(range(0, 11))
itr = len(EbN0dB)
ber = [None] * itr
theoreticalBER = [None] * itr
# -----------------------------Transmitter----------------------------
s = 2 * (nr.rand(N, 1) >= 0.5) - 1 # BPSK
# ------------------------------Channel-------------------------------
h_ray = (1/mt.sqrt(2)) * (nr.randn(H_n, 1) + 1j * nr.randn(H_n, 1))
# Rayleigh fading channel
z = np.zeros((N-1,1))
c = np.concatenate((h_ray, z), axis=0)
r = np.concatenate(([h_ray[0]], np.zeros((1, N-1))), axis=1)
h_ray_y=toeplitz(c, r) # Toeplitz matrix instead of convolution
y_ray = np.dot(h_ray_y, s)
for n in range(itr):
    EbN0dB_n = EbN0dB[n]
    EbN0 = 10.0 ** (-EbN0dB_n / 20.0)
    noise = 1/mt.sqrt(2) * (nr.randn(N+H_n-1) + 1j * nr.randn(N+H_n-1))
    r_ray = y_ray + EbN0 * noise
    # --------------------------Receiver------------------------------
    r_t = np.dot(np.dot(np.linalg.pinv(np.dot(h_ray_y.T, h_ray_y)), h_ray_y.T), r_ray) # (H.T * H)^(-1)*H.T*received
    r_d = 2 * (np.real(r_t) >= 0) - 1
    errors = (s != r_d).sum()
    ber[n] = 1.0 * errors / N
    EbN0_th = 10.0 ** (EbN0dB_n / 10.0)
    theoreticalBER[n] = 0.5 * (1 - mt.sqrt(EbN0_th / (1 + EbN0_th)))
ber=np.asanyarray(ber)
#-------------------------------------------------------------------------
plt.plot(EbN0dB, np.log10(ber), '--bo')
plt.plot(EbN0dB, np.log10(theoreticalBER), 'ro')
plt.xlabel('EbN0(dB)')
plt.ylabel('BER')
plt.grid(True)
plt.title('BPSK - Rayleigh')
plt.show()

Why does my windowed-sinc function have non-linear phase?

Similar to many tutorials on the web, I've tried implementing a windowed-sinc lowpass filter using the following python functions:
def black_wind(w):
    '''Blackman window of width w.'''
    samps = np.arange(w)
    return (0.42 - 0.5 * np.cos(2 * np.pi * samps / (w-1))
            + 0.08 * np.cos(4 * np.pi * samps / (w-1)))

def lp_win_sinc(tw, fc, n):
    '''Lowpass windowed-sinc impulse response.
    Parameters:
        tw = approximate transition width [fraction of Nyquist freq]
        fc = cutoff freq [fraction of Nyquist freq]
        n  = length of output
    Returns:
        s = impulse response of windowed-sinc filter with zero-padding
            appended to make len(s) = n
    '''
    m = int(np.ceil(4. / tw / 2) * 2)
    samps = np.arange(m + 1)
    shift = samps - m // 2
    shift[m // 2] = 1             # avoid division by zero at the centre sample
    h = np.sin(2 * np.pi * fc * shift) / shift
    h[m // 2] = 2 * np.pi * fc    # limit of sin(x)/x at the centre
    h = h * black_wind(m + 1)
    h = h / h.sum()
    s = np.zeros(n)
    s[:len(h)] = h
    return s
For input: 'tw = 0.05', 'fc = 0.2', 'n = 6000', the magnitude of the fft seems reasonable.
tw = 0.05
fc = 0.2
n = 6000
lp = lp_win_sinc(tw, fc, n)
f_lp = np.fft.rfft(lp)
plt.figure()
x = np.linspace(0, 0.5, len(f_lp))
plt.plot(x, np.abs(f_lp))
magnitude of lowpass filter response
However, the phase is non-linear above ~fc.
plt.figure()
x = np.linspace(0, 0.5, len(f_lp))
plt.plot(x, np.unwrap(np.angle(f_lp)))
phase of lowpass filter response
Given the symmetry of the non-zero-padded portion of the impulse response, I would expect the resulting phase to be linear. Can someone explain what is going on? Perhaps I'm using a numpy function incorrectly, or maybe my expectations are incorrect. I'm very grateful for any help.
***********************EDIT***********************
Based on some of the helpful comments on this question and some more work, I wrote a function that produces zero phase delay and therefore makes the np.angle() results a bit easier to interpret.
def lp_win_sinc(tw, fc, n):
    m = int(np.ceil(2. / tw) * 2)
    samps = np.arange(m + 1)
    shift = samps - m // 2
    shift[m // 2] = 1
    h = np.sin(2 * np.pi * fc * shift) / shift
    h[m // 2] = 2 * np.pi * fc
    h = h * np.blackman(m + 1)
    h = h / h.sum()
    s = np.zeros(n)
    s[:len(h)] = h
    return np.roll(s, -(m // 2))
The main change here is using np.roll() to place the line of symmetry at t=0.
The magnitudes in the stop band are crossing zero. The phase of a coefficient jumps by 180 degrees after each zero crossing, which is what confuses np.angle()/np.unwrap(): a real coefficient of -1 at 0° is the same value as +1 at 180°.
The phase as shown in your graph is in fact linear. It's a constant slope in the passband, corresponding to a constant delay in the time domain. It's a much steeper slope, which renders as wrapping around at 2pi boundaries, in the stopband. But the value of the phase in the stopband is not particularly important since those frequencies aren't going to come through the filter anyway.
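If it helps, here is a small check (it assumes the lp, f_lp and x arrays from the question): fit the unwrapped passband phase and convert the slope into a delay, which should come out at roughly m/2 samples, i.e. half the length of the windowed sinc:
phase = np.unwrap(np.angle(f_lp))
passband = x < 0.15                                      # stay well below fc = 0.2
slope = np.polyfit(x[passband], phase[passband], 1)[0]   # radians per (cycle/sample)
delay_samples = -slope / (2 * np.pi)
print(delay_samples)                                     # ~ m/2 for the original (un-rolled) filter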
