I'm trying to understand how to use the nfft method of Jake Vanderplas' nfft module. Unfortunately the example isn't very illustrative, as I'm trying to parametrize everything based on just an input list of samples ([(time0, signal0), (time1, signal1), ...]):
import numpy as np
from nfft import nfft
# define evaluation points
x = -0.5 + np.random.rand(1000)
# define Fourier coefficients
N = 10000
k = - N // 2 + np.arange(N)
f_k = np.random.randn(N)
# non-equispaced fast Fourier transform
f = nfft(x, f_k)
I'm trying to compute f_k in an example where the samples are about 10 ms apart with 1 or 2 ms jitter in that interval.
The implementation documentation:
def nfft(x, f_hat, sigma=3, tol=1E-8, m=None, kernel='gaussian',
use_fft=True, truncated=True):
"""Compute the non-equispaced fast Fourier transform
f_j = \sum_{-N/2 \le k < N/2} \hat{f}_k \exp(-2 \pi i k x_j)
Parameters
----------
x : array_like, shape=(M,)
The locations of the data points. Each value in x should lie
in the range [-1/2, 1/2).
f_hat : array_like, shape=(N,)
The amplitudes at each wave number k = range(-N/2, N/2).
Where I'm stuck:
import numpy as np
from nfft import nfft
def compute_nfft(sample_instants, sample_values):
"""
:param sample_instants: `numpy.ndarray` of sample times in milliseconds
:param sample_values: `numpy.ndarray` of samples values
:return: Horizontal and vertical plot components as `numpy.ndarray`s
"""
N = len(sample_instants)
T = sample_instants[-1] - sample_instants[0]
x = np.linspace(0.0, 1.0 / (2.0 * T), N // 2)
y = 2.0 / N * np.abs(y[0:N // 2])
y = nfft(x, y)
return (x, y)
The example defines a variable f_k which is passed as nfft's f_hat argument.
According to the definition
f_j = \sum_{-N/2 \le k < N/2} \hat{f}_k \exp(-2 \pi i k x_j)
given, f_hat represents the time-domain signal at the specified sampling instants. In your case this simply corresponds to sample_values.
The other argument of nfft, x, holds the actual time instants of those samples. You'd need to provide those separately:
def compute_nfft(sample_instants, sample_values):
N = len(sample_instants)
T = sample_instants[-1] - sample_instants[0]
x = np.linspace(0.0, 1.0 / (2.0 * T), N // 2)
y = nfft(sample_instants, sample_values)
y = 2.0 / N * np.abs(y[0:N // 2])
return (x, y)
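One caveat worth adding: the docstring above requires every value of x to lie in [-1/2, 1/2), and raw millisecond timestamps won't satisfy that, so they need to be rescaled first. Below is a minimal sketch of my own under that assumption (note the package also provides nfft_adjoint for going from samples to frequency-domain coefficients, which may be the direction you actually want):
import numpy as np
from nfft import nfft
def compute_nfft(sample_instants, sample_values):
    # Map millisecond timestamps onto [-0.5, 0.5) as the docstring requires
    t0, t1 = sample_instants[0], sample_instants[-1]
    x = (sample_instants - t0) / (t1 - t0) - 0.5
    x = np.minimum(x, np.nextafter(0.5, -1))  # keep the last point below 0.5
    N = len(sample_instants)                  # assumes an even sample count
    T = t1 - t0
    freqs = np.linspace(0.0, 1.0 / (2.0 * T), N // 2)
    y = nfft(x, sample_values)
    y = 2.0 / N * np.abs(y[0:N // 2])
    return freqs, y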
I used a fit with three Gaussians to adjust my data, but sometimes my curve contains only two Gaussians. In that case the fit cannot find the remaining parameters and raises an error. Is there a method that allows the curve_fit function to work with either two or three Gaussians?
For my main function, I have this code:
FitGWPS = mainCurveFitGWPS(global_ws, period, All_Max_GWPS, DoupleDip)
and my fitting code is:
import numpy as np
from scipy.optimize import curve_fit
#Functions-----------------------------------------
#Gaussian function
def _1gaus(X,C,X_mean,sigma):
return C*np.exp(-(X-X_mean)**2/(2*sigma**2))
def _3gaus(x, amp1,cen1,sigma1, amp2,cen2,sigma2, amp3,cen3,sigma3):
    return amp1*np.exp(-(x-cen1)**2/(2*sigma1**2)) + \
           amp2*np.exp(-(x-cen2)**2/(2*sigma2**2)) + \
           amp3*np.exp(-(x-cen3)**2/(2*sigma3**2))
def ParamFit(Gws, P, Max, popt_Firstgauss):
    # Build initial guesses (amplitude, center, width) for each detected peak
    width = 0
    Amp = []
    cen = []
    wid = []
    for j in range(len(Max)):
        Amp.append(0.8 * Gws[Max[j]])  # amplitude
        cen.append(P[Max[j]])          # frequency
        if j == 0:
            wid.append(0.3 + width * 2.)
        else:
            wid.append(0.3 + popt_Firstgauss[2] * 2.)  # width
    return Amp, wid, cen
def mainCurveFitGWPS(global_ws_in, period_in, All_Max_GWPS, DoupleDip):
    # First fit a single Gaussian, using moment estimates as the initial guess
    mean = sum(period_in * global_ws_in) / sum(global_ws_in)
    sigma = np.sqrt(sum(global_ws_in * (period_in - mean)**2) / sum(global_ws_in))
    Cst = 1 / (2 * np.pi * sigma)
    popt_gauss, pcov_gauss = curve_fit(_1gaus, period_in, global_ws_in,
                                       p0=[Cst, mean, sigma])
    FitGauss = _1gaus(period_in, *popt_gauss)
    # Build initial guesses (amplitude, frequency, width) for each detected peak
    width = 0
    Amp = []
    cen = []
    wid = []
    for j in range(len(All_Max_GWPS)):
        Amp.append(0.8 * global_ws_in[All_Max_GWPS[j]])  # amplitude
        cen.append(period_in[All_Max_GWPS[j]])           # frequency
        if j == 0:
            wid.append(0.3 + width * 2.)
        else:
            wid.append(0.3 + popt_gauss[2] * 2.)  # reuse sigma from the first fit
    # Fit the sum of three Gaussians
    popt_3gauss, pcov_3gauss = curve_fit(_3gaus, period_in, global_ws_in,
                                         p0=[Amp[0], cen[0], wid[0],
                                             Amp[1], cen[1], wid[1],
                                             Amp[2], cen[2], wid[2]], maxfev=5000)
    Fit3Gauss = _3gaus(period_in, *popt_3gauss)
    return Fit3Gauss
For an example, see the attached picture.
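One common way to let curve_fit handle either two or three Gaussians is to build the model from a variable-length parameter list. A sketch of my own (the helper _ngaus is hypothetical; it assumes Amp, cen and wid hold one entry per detected peak, as built above):
import numpy as np
from scipy.optimize import curve_fit
def _ngaus(x, *params):
    # Sum of len(params)//3 Gaussians; params = amp1, cen1, sigma1, amp2, ...
    y = np.zeros_like(x, dtype=float)
    for amp, cen_, sig in zip(params[0::3], params[1::3], params[2::3]):
        y += amp * np.exp(-(x - cen_)**2 / (2 * sig**2))
    return y
# p0 holds three values per detected peak, so the same call fits two or
# three (or more) Gaussians depending on how many peaks were found
p0 = [v for triple in zip(Amp, cen, wid) for v in triple]
popt, pcov = curve_fit(_ngaus, period_in, global_ws_in, p0=p0, maxfev=5000)
When the model function takes *params, curve_fit determines the number of parameters from p0, which is what makes the variable count work.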
I wanted to calculate the normalized cross-correlation function of two signals, where the x axis is the time delay and the y axis is the correlation value between -1 and 1, so I decided to use scipy.
I use the command corr = signal.correlate(s1['Strain'], s2['Strain'], mode='full'), where s1['Strain'] and s2['Strain'] are pandas DataFrame columns, but it doesn't return a normalized function with the x axis as time delay.
Here is example data
s1:
Strain
0 -1.587702e-22
1 -1.425868e-22
2 -1.174897e-22
3 -8.559119e-23
4 -4.949480e-23
. .
. .
. .
For s2 it looks similar. I know the sampling rate of both datasets; it's 4096 kHz.
Thanks for your help.
First of all, to get a normalized coefficient (such that at lag 0 we get the Pearson correlation):
divide both signals by their standard deviation
scale by the length of the signal over which the convolution is done (shortest signal)
out = correlate(x/np.std(x), y/np.std(y), 'full') / min(len(x), len(y))
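As a quick sanity check (assuming roughly zero-mean signals, since the Pearson correlation also subtracts the mean), the zero-lag value of this normalized output should be close to what np.corrcoef reports:
import numpy as np
from scipy.signal import correlate
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 0.6 * x + 0.8 * rng.standard_normal(500)  # correlated test signals
out = correlate(x / np.std(x), y / np.std(y), 'full') / min(len(x), len(y))
print(out[len(x) - 1])          # zero-lag index for equal-length inputs
print(np.corrcoef(x, y)[0, 1])  # should be close for zero-mean signals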
Now for the lags: from the official documentation of correlate, one can read that the full output of the cross-correlation is given by:
z[k] = (x \star y)(k - N + 1) = \sum_{l=0}^{\|x\|-1} x_l y_{l-k+N-1}^{*}
where \star denotes the cross-correlation, y^{*} the complex conjugate, and k goes from 0 up to ||x|| + ||y|| - 2 precisely. N is max(len(x), len(y)).
The lags are the argument of (x \star y) above, so they range from 0 - N + 1 to ||x|| + ||y|| - 2 - N + 1, which is n - 1 with n = min(len(x), len(y)).
Also, by briefly looking at the source code, it seems they swap x and y if convenient (hence the min(len(x), len(y)) in the normalization above). This, however, changes where our lags start, therefore:
N = max(len(x), len(y))
n = min(len(x), len(y))
# if len(x) < len(y):
lags = np.arange(-N + 1, n)
# else:
lags = np.arange(-n + 1, N)
Summary
Try this code on two time series whose cross-correlation you want to plot:
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import correlate
def plot_xcorr(x, y):
"Plot cross-correlation (full) between two signals."
N = max(len(x), len(y))
n = min(len(x), len(y))
if N == len(y):
lags = np.arange(-N + 1, n)
else:
lags = np.arange(-n + 1, N)
c = correlate(x / np.std(x), y / np.std(y), 'full')
plt.plot(lags, c / n)
plt.show()
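A quick usage sketch with a known, arbitrary 30-sample shift (synthetic data, just to see where the peak lands):
rng = np.random.default_rng(1)
s = rng.standard_normal(1000)
plot_xcorr(s, np.roll(s, 30))  # the peak should appear at a lag of +/-30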
To calculate the time delay between two signals, we compute their cross-correlation and find its argmax.
Assuming data_1 and data_2 are samples of two signals:
import numpy as np
correlation = np.correlate(data_1, data_2, mode='same')
delay = np.argmax(correlation) - int(len(correlation)/2)
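A quick check of this recipe on synthetic data with a known delay (the 25-sample shift is arbitrary; the sign of the result depends on which signal is treated as the reference):
import numpy as np
rng = np.random.default_rng(2)
data_1 = rng.standard_normal(400)
data_2 = np.roll(data_1, 25)  # data_2 is data_1 delayed by 25 samples
correlation = np.correlate(data_1, data_2, mode='same')
delay = np.argmax(correlation) - int(len(correlation) / 2)
print(delay)  # -25 here; swapping the arguments flips the sign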
I'm trying to fit a power law to data on a double-log scale, so I've used the curve_fit(...) function from the scipy.optimize package.
To run the function I've implemented the following piece of code: COR_coef[i] = curve_fit(lambda x, m: c * x ** m, x, COR_IFG[:, i])[0][0]. To the best of my knowledge, curve_fit(...) should now correctly fit a power law (a straight line on a double-log plot) to my data. However, for some reason I just do not seem to get the fit right. See the attached picture for the data and its fit.
Some more context with regards to the minimum reproducible example (see below):
The code generates random noise for simulation purposes; this is done in white_noise(...).
This random noise is then misaligned (in a for-loop, with different fractions of misalignment given by the variable values_to_shift, so the development of the power law can be studied) and subtracted from the original noise to obtain a residual signal.
The residual signal is the signal the power-law is fitted to
The curve_fit(...) is applied in the sim_powerlaw_coefficient(...) function
I am aware that my residual signal shows some artifacts when the misalignment gets larger; unfortunately, I don't know how to get rid of these artifacts.
MINIMUM REPRODUCIBLE EXAMPLE
import matplotlib.pyplot as plt
import numpy as np
import numpy.fft as fft
import numpy.random as rnd
from scipy.optimize import curve_fit
plt.style.use('seaborn-darkgrid')
rnd.seed(100) # to select a random seed for creating the "random" noise
grad = -5 / 3. # slope to use for every function
c = 1 # base parameter for the powerlaw
ylim = [1e-7, 30] # range for the double log plots of the powerfrequency domains
values_to_shift = [0, 2**-11, 2**-10, 2**-9, 2**-8, 2**-7, 2**-6, 2**-5, 2**-4, 2**-3, 2**-2, 2**-1, 2**0] # fractions of misalignment
def white_noise(n: int, N: int):
"""
- Creates a data set of white noise with size n, N;
- Filters this dataset with the corresponding slope;
This slope is usually equal to -5/3 or -2/3
- Makes sure the slope is equal to the requested slope in the double log scale.
:param n: size of random array
:param N: number of random arrays
:param slope: slope of the gradient
:return: white_noise, filtered white_noise and the original signal
"""
m = grad
x = np.linspace(1, n, n // 2)
slope_loglog = c * x ** m
whitenoise = rnd.randn(n // 2, N) + 1j * rnd.randn(n // 2, N)
whitenoise[0, :] = 0 # zero-mean noise
whitenoise_filtered = whitenoise * slope_loglog[:, np.newaxis]
whitenoise = 2 * np.pi * np.concatenate((whitenoise, whitenoise[0:1, :], np.conj(whitenoise[-1:0:-1, :])), axis=0)
whitenoise_filtered = 2 * np.pi * np.concatenate(
(whitenoise_filtered, whitenoise_filtered[0:1, :], np.conj(whitenoise_filtered[-1:0:-1, :])), axis=0)
whitenoise_signal = fft.ifft(whitenoise_filtered, axis=0)
whitenoise_signal = np.real_if_close(whitenoise_signal)
if np.iscomplex(whitenoise_signal).any():
print('Warning! whitenoise_signal is complex-valued!')
whitenoise_retransformed = fft.fft(whitenoise_signal, axis=0)
return whitenoise, whitenoise_filtered, whitenoise_signal, whitenoise_retransformed, slope_loglog
def sim_powerlaw_coefficient(n: int, N: int, show_powerlaw=0):
"""
:param n: Number of values in the IFG
:param N: Number of IFGs
:return: Returns the coefficient after subtraction of two IFGs
"""
master = white_noise(n, N)
slave = white_noise(n, N)
x = np.linspace(1, n, n // 2)
signal_IFG = master[2] - slave[2]
noise_IFG = np.abs(fft.fft(signal_IFG, axis=0))[0:n // 2, :]
for k in range(len(values_to_shift)):
shift = int(round(values_to_shift[k] * n))  # np.int is deprecated; use int
inp = signal_IFG.copy()
# the weather model is a shifted copy of the actual signal, to better understand the errors that are introduced.
weather_model = np.roll(inp, shift, axis=0)
WM_IFG = np.abs(fft.fft(weather_model, axis=0)[0:n // 2, :])
signal_corrected = signal_IFG - weather_model
COR_IFG = np.abs(fft.fft(signal_corrected, axis=0)[0:n // 2, :])
COR_coef = np.zeros(N)
for i in range(N):
COR_coef[i] = curve_fit(lambda x, m: c * x ** m, x, COR_IFG[:, i])[0][0]
plt.figure(figsize=(15, 10))
plt.title('Corrected IFG (combined - weather model)')
plt.loglog(COR_IFG, label='Corrected IFG')
plt.ylim(ylim)
plt.xlabel('log(k)')
plt.ylabel('log(P)')
plt.loglog(c * x ** COR_coef.mean(), '-.', label=f'COR powerlaw coef:{COR_coef.mean()}')
plt.legend(loc=0)
plt.tight_layout()
sim_powerlaw_coefficient(8192, 1, show_powerlaw=1)
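One thing worth trying here: since a power law y = c * x**m is a straight line on a double-log plot, the exponent can also be obtained by a linear least-squares fit on the logs instead of curve_fit on the raw data. A sketch (x and y stand for one column of the spectrum):
import numpy as np
def loglog_slope(x, y):
    # least-squares slope of log(y) vs log(x); m and c in y ~ c * x**m
    m, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return m, np.exp(log_c)
Fitting on the logs weights all decades evenly, whereas curve_fit on the raw data is dominated by the largest values, which can make an otherwise reasonable fit look wrong on a log-log plot.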
I need to plot the following function using Python, numpy and matplotlib:
\Psi(x) = \frac{2}{N+1} \sum_{n=1,3,5,\ldots}^{N} (-1)^{(n-1)/2} \sin(n x)
for the values of N = 5, 20 and 60.
I've created a list of odd numbers using:
def odd(n):
nums = []
for i in range(1, 2*n, 2):
nums.append(i)
return nums
But I don't know how to use this in a sigma function because I need to vary my x values and sum over the function for the range of odd(n).
If you want to plot (i.e. visualise) the function for some N, the procedure is as follows:
Generate an array of x values. In this case, ranging from -pi to pi makes most sense.
Write a loop that computes one sin() term at a time and accumulates the result in a separate array, which we call Psi.
Finally, multiply Psi by the constant 2/(N+1).
Plot the result.
import numpy as np
import matplotlib.pyplot as plt
# x is 100 equally spaced points from -pi to pi, inclusive
x = np.linspace(-np.pi, np.pi, 100)
Psi = 0*x # now Psi is an array of zeros
N = 60
# second input of range is N+1 since our index n satisfies 1 <= n < N+1
# third input makes n increment by 2 each loop instead of the default 1
for n in range(1, N+1, 2):
Psi += (-1)**((n-1)//2) * np.sin(n*x)  # parentheses matter: -1**k is always -1
Psi *= 2/(N+1)
plt.plot(x, Psi)
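Since the question asks for N = 5, 20 and 60, here is the same computation looped to overlay all three curves (this reuses the x array from above):
for N in (5, 20, 60):
    Psi = np.zeros_like(x)
    for n in range(1, N + 1, 2):
        Psi += (-1)**((n - 1)//2) * np.sin(n*x)
    plt.plot(x, 2/(N + 1) * Psi, label=f'N = {N}')
plt.legend()
plt.show()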
Code without pure Python loops:
def Psi(x, N=7):
"""Note: N should be odd """
_s = np.arange(1, int((N + 1) / 2) + 1)
return 2 * np.sum(np.where(_s % 2, 1, -1) * np.sin((2 * _s - 1) * x)) / (N + 1)
This code is also without loops and should work for any value of N; x must be an array or list with more than one element:
import numpy as np
from numpy import matlib
import matplotlib.pyplot as plt
def psi(x,N):
n=np.arange(0,N,2)+1
sigma = matlib.repmat((-1)**((n-1)/2),len(x),1).T*np.sin(matlib.repmat(n,len(x),1).T*x)
PSI = (2/(N+1))*np.sum(sigma,axis=0)
return PSI
x=np.linspace(0,2*np.pi,50)
N=5
y = psi(x,N)
plt.plot(x, y)
Thank you for all of your constructive criticism on my last post. I have made some changes, but alas my code is still not working and I can't figure out why. What happens when I run this version is that I get a runtime warning about invalid values encountered in matmul.
My code is given as
from __future__ import division
import numpy as np
from scipy.linalg import eig
from scipy.linalg import toeplitz
def poldif(*arg):
"""
Calculate differentiation matrices on arbitrary nodes.
Returns the differentiation matrices D1, D2, .. DM corresponding to the
M-th derivative of the function f at arbitrarily specified nodes. The
differentiation matrices can be computed with unit weights or
with specified weights.
Parameters
----------
x : ndarray
vector of N distinct nodes
M : int
maximum order of the derivative, 0 < M <= N - 1
OR (when computing with specified weights)
x : ndarray
vector of N distinct nodes
alpha : ndarray
vector of weight values alpha(x), evaluated at x = x_j.
B : ndarray
matrix of size M x N, where M is the highest derivative required.
It should contain the quantities B[l,j] = beta_{l,j} =
l-th derivative of log(alpha(x)), evaluated at x = x_j.
Returns
-------
DM : ndarray
M x N x N array of differentiation matrices
Notes
-----
This function returns M differentiation matrices corresponding to the
1st, 2nd, ... M-th derivatives on arbitrary nodes specified in the array
x. The nodes must be distinct but are, otherwise, arbitrary. The
matrices are constructed by differentiating the N-th order Lagrange
interpolating polynomial that passes through the specified points.
The M-th derivative of the grid function f is obtained by the matrix-
vector multiplication
.. math::
f^{(m)}_i = D^{(m)}_{ij}f_j
This function is based on code by Rex Fuzzle
https://github.com/RexFuzzle/Python-Library
References
----------
..[1] B. Fornberg, Generation of Finite Difference Formulas on Arbitrarily
Spaced Grids, Mathematics of Computation 51, no. 184 (1988): 699-706.
..[2] J. A. C. Weidemann and S. C. Reddy, A MATLAB Differentiation Matrix
Suite, ACM Transactions on Mathematical Software, 26, (2000) : 465-519
"""
if len(arg) > 3:
raise Exception('number of arguments is either two OR three')
if len(arg) == 2:
# unit weight function : arguments are nodes and derivative order
x, M = arg[0], arg[1]
N = np.size(x)
# assert M<N, "Derivative order cannot be larger or equal to number of points"
if M >= N:
raise Exception("Derivative order cannot be larger or equal to number of points")
alpha = np.ones(N)
B = np.zeros((M, N))
elif len(arg) == 3:
# specified weight function : arguments are nodes, weights and B matrix
x, alpha, B = arg[0], arg[1], arg[2]
N = np.size(x)
M = B.shape[0]
I = np.eye(N) # identity matrix
L = np.logical_or(I, np.zeros(N)) # logical identity matrix
XX = np.transpose(np.array([x, ] * N))
DX = XX - np.transpose(XX) # DX contains entries x(k)-x(j)
DX[L] = np.ones(N) # put 1's on the main diagonal
c = alpha * np.prod(DX, 1) # quantities c(j)
C = np.transpose(np.array([c, ] * N))
C = C / np.transpose(C) # matrix with entries c(k)/c(j).
Z = 1 / DX # Z contains entries 1/(x(k)-x(j))
Z[L] = 0 # eye(N)*ZZ; # with zeros on the diagonal.
X = np.transpose(np.copy(Z)) # X is same as Z', but with ...
Xnew = X
for i in range(0, N):
Xnew[i:N - 1, i] = X[i + 1:N, i]
X = Xnew[0:N - 1, :] # ... diagonal entries removed
Y = np.ones([N - 1, N]) # initialize Y and D matrices.
D = np.eye(N) # Y is matrix of cumulative sums
DM = np.empty((M, N, N)) # differentiation matrices
for ell in range(1, M + 1):
Y = np.cumsum(np.vstack((B[ell - 1, :], ell * (Y[0:N - 1, :]) * X)), 0) # diags
D = ell * Z * (C * np.transpose(np.tile(np.diag(D), (N, 1))) - D) # off-diags
D[L] = Y[N - 1, :]
DM[ell - 1, :, :] = D
return DM
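A small smoke test for poldif (my own, not part of the original suite): differentiating a smooth function on a handful of distinct nodes should reproduce its analytic derivatives to high accuracy.
import numpy as np
x = np.linspace(0, 1, 12)  # any set of distinct nodes will do
DM = poldif(x, 2)          # first and second derivative matrices
print(np.allclose(DM[0] @ np.sin(x), np.cos(x)))   # expect True
print(np.allclose(DM[1] @ np.sin(x), -np.sin(x)))  # expect True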
def herdif(N, M, b=1):
"""
Calculate differentiation matrices using Hermite collocation.
Returns the differentiation matrices D1, D2, .. DM corresponding to the
M-th derivative of the function f, at the N Hermite nodes (the zeros
of the N-th degree Hermite polynomial), scaled by b.
Parameters
----------
N : int
number of grid points
M : int
maximum order of the derivative, 0 < M < N
b : float, optional
scale parameter, real and positive
Returns
-------
x : ndarray
N x 1 array of Hermite nodes which are zeros of the N-th degree
Hermite polynomial, scaled by b
DM : ndarray
M x N x N array of differentiation matrices
Notes
-----
This function returns M differentiation matrices corresponding to the
1st, 2nd, ... M-th derivatives on a Hermite grid of N points. The
matrices are constructed by differentiating N-th order Hermite
interpolants.
The M-th derivative of the grid function f is obtained by the matrix-
vector multiplication
.. math::
f^{(m)}_i = D^{(m)}_{ij}f_j
References
----------
..[1] B. Fornberg, Generation of Finite Difference Formulas on Arbitrarily
Spaced Grids, Mathematics of Computation 51, no. 184 (1988): 699-706.
..[2] J. A. C. Weidemann and S. C. Reddy, A MATLAB Differentiation Matrix
Suite, ACM Transactions on Mathematical Software, 26, (2000) : 465-519
..[3] R. Baltensperger and M. R. Trummer, Spectral Differencing With A
Twist, SIAM Journal on Scientific Computing 24, (2002) : 1465-1487
"""
if M >= N - 1:
raise Exception('number of nodes must be greater than M + 1')
if M <= 0:
raise Exception('derivative order must be at least 1')
x = herroots(N) # compute Hermite nodes
alpha = np.exp(-x * x / 2) # compute Hermite weights.
beta = np.zeros([M + 1, N])
# construct beta[l,j] = (d^l/dx^l alpha(x)) / alpha(x), evaluated at x = x_j, recursively
beta[0, :] = np.ones(N)
beta[1, :] = -x
for ell in range(2, M + 1):
beta[ell, :] = -x * beta[ell - 1, :] - (ell - 1) * beta[ell - 2, :]
# remove initialising row from beta
beta = np.delete(beta, 0, 0)
# compute differentiation matrix (b=1)
DM = poldif(x, alpha, beta)
# scale nodes by the factor b
x = x / b
# scale the matrix by the factor b
for ell in range(M):
DM[ell, :, :] = (b ** (ell + 1)) * DM[ell, :, :]
return x, DM
def herroots(N):
"""
Compute roots of the Hermite polynomial of degree N
Parameters
----------
N : int
degree of the Hermite polynomial
Returns
-------
x : ndarray
N x 1 array of Hermite roots
"""
# Jacobi matrix
d = np.sqrt(np.arange(1, N))
J = np.diag(d, 1) + np.diag(d, -1)
# compute eigenvalues
mu = eig(J)[0]
# return sorted, normalised eigenvalues
# real part only since all roots must be real.
return np.real(np.sort(mu) / np.sqrt(2))
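As a cross-check (assuming, as the division by sqrt(2) suggests, that herroots follows the physicists' convention), NumPy's Gauss-Hermite quadrature rule should return the same nodes:
import numpy as np
nodes, _ = np.polynomial.hermite.hermgauss(8)    # physicists' Hermite nodes
print(np.allclose(np.sort(nodes), herroots(8)))  # expect True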
a = 1-1j
b = 2+0.2j
c1 = 0.34
c2 = 0.005
alpha1 = (4*c2/a)**0.25
alpha2 = b/2*a
Nx = 220
# hermite differentiation matrices
[x,D] = herdif(Nx, 2, np.real(alpha1))
D1 = D[0,:]
D2 = D[1,:]
# integration weights
diff = np.diff(x)
#print(len(diff))
p = np.concatenate([np.zeros(1), diff])
q = np.concatenate([diff, np.zeros(1)])
w = (p + q)/2
Q = np.diag(w)
#Discretised operator
const = c1*np.diag(np.ones(len(x)))-c2*(np.diag(x)*np.diag(x))
#print(const)
A = a*D2 - b*D1 + const
##### Timestepping
tmax = 200
tmin = 0
dt = 1
n = int((tmax - tmin)/dt)  # np.linspace needs an integer sample count
tvec = np.linspace(0, tmax, n, endpoint=True)
#(len(tvec))
q = np.zeros((Nx, len(tvec)),dtype=complex)
f = np.zeros((Nx, len(tvec)),dtype=complex)
q0 = np.ones(Nx)*10**4
q[:,0] = q0
#print(q[:,0])
#print(q0)
# qnew - qold = dt*Aqold + dt*N(qold,qold,qold)
# qnew - qold = dt*Aqnew - dt*N(qold,qold,qold)
# therefore qnew - qold = 0.5*dtAqold + 0.5*dt*Aqnew + dtN(qold,qold,qold)
# rearranging to give qnew( 1- 0.5Adt) = (1 + 0.5Adt) + dt N(qold,qold,qold)
from numpy.linalg import inv
inverted = inv(np.eye(Nx)-0.5*A*dt)
forqold = (np.eye(Nx) + 0.5*A*dt)
firstterm = np.matmul(inverted,forqold)
for t in range(0, len(tvec)-1):
nl = abs(np.square(q[:,t]))*q[:,t]
q[:,t+1] = np.matmul(firstterm,q[:,t]) - dt*np.matmul(inverted,nl)
where the Hermite differentiation matrices can be found online and are in a different file. This code blows up after five iterations, which I cannot understand, as I don't see how it differs from the MATLAB code found here: https://www.bagherigroup.com/research/open-source-codes/
I would really appreciate any help.
Error in:
q[:,t+1] = inverted*forgold*np.array(q[:,t]) + inverted*dt*np.array(nl)
q[:, t+1] indexes a 2d array (probably not a np.matrix which is more MATLAB like). This indexing reduces the number of dimensions by 1, hence the (220,) shape in the error message.
The error message says the RHS is (220,220). That shape probably comes from inverted and forgold. np.array(q[:,t]) is 1d. Multiplying a (220,220) by a (220,) is ok, but you can't put that square array into a 1d slot.
Both uses of np.array in the error line are superfluous. Their arguments are already ndarray.
As for the loop, it may be necessary. It looks like q[:,t+1] is a function of q[:,t], a serial rather than parallel operation. Those are harder to render as 'vectorized' (unless you can use cumsum-like operations).
Note that in numpy * is elementwise multiplication, the .* of MATLAB. np.dot and @ are used for matrix multiplication.
q[:,t+1] = inverted @ q[:,t]
would work
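Putting that together, a corrected update step using matrix multiplication throughout (a sketch reusing the variables defined in the question) would look like:
firstterm = inverted @ forqold  # same as np.matmul(inverted, forqold)
for t in range(len(tvec) - 1):
    nl = np.abs(q[:, t])**2 * q[:, t]  # cubic nonlinear term
    q[:, t + 1] = firstterm @ q[:, t] - dt * (inverted @ nl)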