Optimizing Python function with Parakeet

I need this function to be optimized as I am trying to make my OpenGL simulation run faster. I want to use Parakeet, but I can't quite understand in what way I would need to modify the code below in order to do so. Can you see what I should do?
def distanceMatrix(self, x, y, z):
    """Computes distances between all particles and places the result in a
    matrix such that the ij-th matrix entry corresponds to the distance
    between particle i and j"""
    xtemp = tile(x, (self.N, 1))
    dx = xtemp - xtemp.T
    ytemp = tile(y, (self.N, 1))
    dy = ytemp - ytemp.T
    ztemp = tile(z, (self.N, 1))
    dz = ztemp - ztemp.T
    # Particles 'feel' each other across the periodic boundaries
    if self.periodicX:
        dx[dx > self.L/2] = dx[dx > self.L/2] - self.L
        dx[dx < -self.L/2] = dx[dx < -self.L/2] + self.L
    if self.periodicY:
        dy[dy > self.L/2] = dy[dy > self.L/2] - self.L
        dy[dy < -self.L/2] = dy[dy < -self.L/2] + self.L
    if self.periodicZ:
        dz[dz > self.L/2] = dz[dz > self.L/2] - self.L
        dz[dz < -self.L/2] = dz[dz < -self.L/2] + self.L
    # Total distances
    d = sqrt(dx**2 + dy**2 + dz**2)
    # Mark zero entries with -1 to avoid divergences
    d[d == 0] = -1
    return d, dx, dy, dz
From what I can tell, Parakeet should be able to use the above function without modifications - it only uses NumPy and math. But I always get the following error when calling the function from the Parakeet jit wrapper:
AssertionError: Unsupported function: <bound method Particles.distanceMatrix of <particles.Particles instance at 0x04CD8E90>>

Parakeet is still young, its NumPy support is incomplete, and your code touches on several features that don't yet work.
1) You're wrapping a method, while Parakeet so far only knows how to deal with functions. The common workaround is to make a @jit-wrapped helper function and have your method call into that with all of the required member data (a sketch follows this list). The reason that methods don't work is that it's non-trivial to assign a meaningful type to 'self'. It's not impossible, but tricky enough that methods won't make their way into Parakeet until lower-hanging fruit are plucked. Speaking of low-hanging fruit...
2) Boolean indexing. Not yet implemented but will be in the next release.
3) np.tile: Also doesn't work, will also probably be in the next release. If you want to see which builtins and NumPy library functions will work, take a look at Parakeet's mappings module.
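For concreteness, here is a minimal sketch of the workaround from point 1 (the class and names are illustrative, not from the original code): the jitted function is a plain module-level function, and the method just forwards its member data to it.

import numpy as np
from parakeet import jit

@jit
def pairwise_dx(x):
    # all-pairs x-differences, written as a free function with no 'self'
    return np.array([[x1 - x2 for x1 in x] for x2 in x])

class Particles(object):
    def __init__(self, x):
        self.x = np.asarray(x, dtype=float)
        self.N = len(x)

    def dxMatrix(self):
        # the method only unpacks member data and calls the jitted helper
        return pairwise_dx(self.x)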
I rewrote your code to be a little friendlier to Parakeet:
@jit
def parakeet_dist(x, y, z, L, periodicX, periodicY, periodicZ):
    # perform all-pairs computations more explicitly
    # instead of tile + broadcasting
    def periodic_diff(x1, x2, periodic):
        diff = x1 - x2
        if periodic:
            if diff > (L / 2): diff -= L
            if diff < (-L / 2): diff += L
        return diff
    dx = np.array([[periodic_diff(x1, x2, periodicX) for x1 in x] for x2 in x])
    dy = np.array([[periodic_diff(y1, y2, periodicY) for y1 in y] for y2 in y])
    dz = np.array([[periodic_diff(z1, z2, periodicZ) for z1 in z] for z2 in z])
    d = np.sqrt(dx**2 + dy**2 + dz**2)
    # since we can't yet use boolean indexing for masking out zero distances,
    # we have to fall back on explicit loops instead
    for i in xrange(len(x)):
        for j in xrange(len(x)):
            if d[i, j] == 0: d[i, j] = -1
    return d, dx, dy, dz
On my machine this runs only ~3x faster than NumPy for N = 2000 (0.39s for NumPy vs. 0.14s for Parakeet). If I rewrite the array traversals to use loops more explicitly then the performance goes up to ~6x faster than NumPy (Parakeet runs in ~0.06s):
@jit
def loopy_dist(x, y, z, L, periodicX, periodicY, periodicZ):
    N = len(x)
    dx = np.zeros((N, N))
    dy = np.zeros((N, N))
    dz = np.zeros((N, N))
    d = np.zeros((N, N))
    def periodic_diff(x1, x2, periodic):
        diff = x1 - x2
        if periodic:
            if diff > (L / 2): diff -= L
            if diff < (-L / 2): diff += L
        return diff
    for i in xrange(N):
        for j in xrange(N):
            dx[i, j] = periodic_diff(x[j], x[i], periodicX)
            dy[i, j] = periodic_diff(y[j], y[i], periodicY)
            dz[i, j] = periodic_diff(z[j], z[i], periodicZ)
            d[i, j] = dx[i, j] ** 2 + dy[i, j] ** 2 + dz[i, j] ** 2
            if d[i, j] == 0: d[i, j] = -1
            else: d[i, j] = np.sqrt(d[i, j])
    return d, dx, dy, dz
With a little creative rewriting you can also get the above code running in Numba (sketched below), but it only goes ~1.5x faster than NumPy (0.25 seconds). The compile times were: Parakeet with comprehensions, 1 second; Parakeet with loops, 0.5 seconds; Numba with loops, 0.9 seconds.
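For reference, a direct Numba port of the loopy version might look something like this (an illustrative sketch with periodic_diff inlined, not the exact code I timed):

import numpy as np
from numba import njit

@njit
def numba_dist(x, y, z, L, periodicX, periodicY, periodicZ):
    N = len(x)
    dx = np.zeros((N, N))
    dy = np.zeros((N, N))
    dz = np.zeros((N, N))
    d = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            # periodic_diff inlined per coordinate
            ddx = x[j] - x[i]
            if periodicX:
                if ddx > L / 2: ddx -= L
                if ddx < -L / 2: ddx += L
            ddy = y[j] - y[i]
            if periodicY:
                if ddy > L / 2: ddy -= L
                if ddy < -L / 2: ddy += L
            ddz = z[j] - z[i]
            if periodicZ:
                if ddz > L / 2: ddz -= L
                if ddz < -L / 2: ddz += L
            dx[i, j] = ddx
            dy[i, j] = ddy
            dz[i, j] = ddz
            s = ddx*ddx + ddy*ddy + ddz*ddz
            d[i, j] = -1.0 if s == 0 else np.sqrt(s)
    return d, dx, dy, dz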
Hopefully the next few releases will enable more idiomatic use of NumPy library functions, but for now comprehensions or loops are often the way to go.

Related

Numba parallel code slower than its sequential counterpart

I'm new to Numba and I'm trying to implement an old Fortran code in Python using Numba (version 0.54.1), but when I add parallel=True the program actually slows down. My program is very simple: I vary the positions x and y on an L x L grid, and for each position in the grid I perform a summation:
import numpy as np
import numba as nb
@nb.njit(parallel=True)
def lyapunov_grid(x_grid, y_grid, k, N):
    L = len(x_grid)
    lypnv = np.zeros((L, L))
    for ii in nb.prange(L):
        for jj in range(L):
            x = x_grid[ii]
            y = y_grid[jj]
            beta0 = 0
            sumT11 = 0
            for j in range(N):
                y = (y - k*np.sin(x)) % (2*np.pi)
                x = (x + y) % (2*np.pi)
                J = np.array([[1.0, -k*np.cos(x)], [1.0, 1.0 - k*np.cos(x)]])
                beta = np.arctan((-J[1,0]*np.cos(beta0) + J[1,1]*np.sin(beta0))/(J[0,0]*np.cos(beta0) - J[0,1]*np.sin(beta0)))
                T11 = np.cos(beta0)*(J[0,0]*np.cos(beta) - J[1,0]*np.sin(beta)) - np.sin(beta0)*(J[0,1]*np.cos(beta) - J[1,1]*np.sin(beta))
                sumT11 += np.log(abs(T11))/np.log(2)
                beta0 = beta
            lypnv[ii, jj] = sumT11/N
    return lypnv
# Compile
_ = lyapunov_grid(np.linspace(0, 1, 10), np.linspace(0, 1, 10), 1, 10)
# Parameters
N = int(1e3)
L = 128
pi = np.pi
k = 1.5
# Limits of the phase space
x0 = -pi
xf = pi
y0 = -pi
yf = pi
# Grid positions
x = np.linspace(x0, xf, L, endpoint=True)
y = np.linspace(y0, yf, L, endpoint=True)
lypnv = lyapunov_grid(x, y, k, N)
With parallel=False it takes about 8s to run, however with parallel=True it takes about 14s. I also tested with another code from https://github.com/animator/mandelbrot-numba and in this case the parallelization works.
import math
import numpy as np
import numba as nb
WIDTH = 1000
MAX_ITER = 1000
@nb.njit(parallel=True)
def mandelbrot(width, max_iter):
    pixels = np.zeros((width, width, 3), dtype=np.uint8)
    for y in nb.prange(width):
        for x in range(width):
            c0 = complex(3.0*x/width - 2, 3.0*y/width - 1.5)
            c = 0
            for i in range(1, max_iter):
                if abs(c) > 2:
                    log_iter = math.log(i)
                    pixels[y, x, :] = np.array([int(255*(1+math.cos(3.32*log_iter))/2),
                                                int(255*(1+math.cos(0.774*log_iter))/2),
                                                int(255*(1+math.cos(0.412*log_iter))/2)],
                                               dtype=np.uint8)
                    break
                c = c * c + c0
    return pixels
# compile
_ = mandelbrot(WIDTH, 10)
calcpixels = mandelbrot(WIDTH, MAX_ITER)
One main issue is that the second function call compiles the function again. Indeed, the types of the provided arguments change: in the first call the third argument is an integer (an int, mapped to np.int_), while in the second call the third argument (k) is a floating-point number (a float, mapped to np.float64). Numba recompiles the function for each new combination of argument types because the types are deduced from the arguments, and it does not know you want a np.float64 for the third argument (the first call compiled the function for a np.int_ there). One simple way to fix the problem is to change the first call to:
_ = lyapunov_grid(np.linspace(0, 1, 10), np.linspace(0, 1, 10), 1.0, 10)
However, this is not a robust way to fix the problem. You can instead specify the parameter types to Numba so that it compiles the function at declaration time. This also removes the need to artificially call the function (with useless parameters).
@nb.njit('float64[:,:](float64[::1], float64[::1], float64, float64)', parallel=True)
Note that (J[0,0]*np.cos(beta0) - J[0,1]*np.sin(beta0)) is zero on the first iteration, resulting in a division by 0.
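One possible guard (my suggestion, not in the question's code) is np.arctan2, which is well defined even when the denominator vanishes - though note that it returns the angle on a different branch than np.arctan, so check this against the algorithm's intent:

num = -J[1,0]*np.cos(beta0) + J[1,1]*np.sin(beta0)
den =  J[0,0]*np.cos(beta0) - J[0,1]*np.sin(beta0)
beta = np.arctan2(num, den)  # no explicit division, finite even when den == 0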
Another main issue comes from the allocation of many small arrays in the loop, causing contention in the standard allocator (see this post for more information). While Numba could theoretically optimize this (i.e. replace the array with local variables), it actually does not, resulting in a huge slowdown and contention. Fortunately, in your case, you do not actually need to create the array in the innermost loop: you can create it once in the encompassing loop and modify it in the innermost loop. Here is the optimized code:
@nb.njit('float64[:,:](float64[::1], float64[::1], float64, float64)', parallel=True)
def lyapunov_grid(x_grid, y_grid, k, N):
    L = len(x_grid)
    lypnv = np.zeros((L, L))
    for ii in nb.prange(L):
        J = np.ones((2, 2), dtype=np.float64)
        for jj in range(L):
            x = x_grid[ii]
            y = y_grid[jj]
            beta0 = 0
            sumT11 = 0
            for j in range(N):
                y = (y - k*np.sin(x)) % (2*np.pi)
                x = (x + y) % (2*np.pi)
                J[0, 1] = -k*np.cos(x)
                J[1, 1] = 1.0 - k*np.cos(x)
                beta = np.arctan((-J[1,0]*np.cos(beta0) + J[1,1]*np.sin(beta0))/(J[0,0]*np.cos(beta0) - J[0,1]*np.sin(beta0)))
                T11 = np.cos(beta0)*(J[0,0]*np.cos(beta) - J[1,0]*np.sin(beta)) - np.sin(beta0)*(J[0,1]*np.cos(beta) - J[1,1]*np.sin(beta))
                sumT11 += np.log(abs(T11))/np.log(2)
                beta0 = beta
            lypnv[ii, jj] = sumT11/N
    return lypnv
Here are the results on an old 2-core machine (with 4 hardware threads):
Original sequential: 15.9 s
Original parallel: 11.9 s
Fix-build sequential: 15.7 s
Fix-build parallel: 10.1 s
Optimized sequential: 2.73 s
Optimized parallel: 0.94 s
The optimized implementation is much faster than the others. The optimized parallel version scales well compared with the original one (it is 2.9 times faster than its sequential counterpart). Overall, the best version is about 12 times faster than the original parallel version. I would expect an even faster computation on a recent machine with many more cores.

Usage of parallel option in numba.jit decorator makes function give wrong result

Given two opposite corners of a rectangle (x1, y1) and (x2, y2) and two radii r1 and r2, find the ratio of points that lie between the circles defined by the radii r1 and r2 to the total number of points in the rectangle.
Simple NumPy approach:
def func_1(x1, y1, x2, y2, r1, r2, n):
    x11, y11 = np.meshgrid(np.linspace(x1, x2, n), np.linspace(y1, y2, n))
    z1 = np.sqrt(x11**2 + y11**2)
    a = np.where((z1 > r1) & (z1 < r2))
    fill_factor = len(a[0])/(n*n)
    return fill_factor
Next I tried to optimize this function with the jit decorator from numba. When I use:
nopython = True
The function is faster and gives the right output. But when I also add:
parallel = True
The function is faster but gives the wrong result.
I know that this has something to do with my z matrix since that is not being updated properly.
@jit(nopython=True, parallel=True)
def func_2(x1, y1, x2, y2, r1, r2, n):
    x_ = np.linspace(x1, x2, n)
    y_ = np.linspace(y1, y2, n)
    z1 = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            z1[i][j] = np.sqrt(x_[i]*x_[i] + y_[j]*y_[j])
    a = np.where((z1 > r1) & (z1 < r2))
    fill_factor = len(a[0])/(n*n)
    return fill_factor
Test values:
x1 = 1.0
x2 = -1.0
y1 = 1.0
y2 = -1.0
r1 = 0.5
r2 = 0.75
n = 25000
Additional info: Python version: 3.6.1, Numba version: 0.34.0+5.g1762237, NumPy version: 1.13.1
The problem with parallel=True is that it's a black box. Numba doesn't even guarantee that it will actually parallelize anything; it uses heuristics to find out whether the code is parallelizable and what can be done in parallel. These heuristics can fail, and in your example they do fail, just like in my experiments with parallel and Numba. That makes parallel untrustworthy and I would advise against using it!
In newer versions (0.34) prange was added, and you could have more luck with that. It can't be applied directly in this case because prange works like range, and that's different from np.linspace...
Just a note: you can avoid building z and doing the np.where in your function completely; you could just do the checks explicitly:
import numpy as np
import numba as nb

@nb.njit  # equivalent to @jit(nopython=True)
def func_2(x1, y1, x2, y2, r1, r2, n):
    x_ = np.linspace(x1, x2, n)
    y_ = np.linspace(y1, y2, n)
    cnts = 0
    for i in range(n):
        for j in range(n):
            z = np.sqrt(x_[i] * x_[i] + y_[j] * y_[j])
            if r1 < z < r2:
                cnts += 1
    fill_factor = cnts/(n*n)
    return fill_factor
That should also provide some speedup compared to your function, maybe even more than using parallel=True (if it would work correctly).
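If you still want to experiment with parallelism on that explicit-loop version, a prange variant is sketched below (an untested suggestion for a recent Numba; it relies on Numba recognizing cnts += 1 as a reduction):

import numpy as np
import numba as nb

@nb.njit(parallel=True)
def func_2_prange(x1, y1, x2, y2, r1, r2, n):
    x_ = np.linspace(x1, x2, n)
    y_ = np.linspace(y1, y2, n)
    cnts = 0
    for i in nb.prange(n):  # parallel outer loop with a scalar reduction
        for j in range(n):
            z = np.sqrt(x_[i] * x_[i] + y_[j] * y_[j])
            if r1 < z < r2:
                cnts += 1
    return cnts / (n * n)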

How to create an array that can be accessed according to its indices in Numpy?

I am trying to solve the following problem via a Finite Difference Approximation in Python using NumPy:
$u_t = k \, u_{xx}$, on $0 < x < L$ and $t > 0$;
$u(0,t) = u(L,t) = 0$;
$u(x,0) = f(x)$.
I take $u(x,0) = f(x) = x^2$ for my problem.
Programming is not my forte so I need help with the implementation of my code. Here is my code (I'm sorry it is a bit messy, but not too bad I hope):
## This program is to implement a Finite Difference method approximation
## to solve the Heat Equation, u_t = k * u_xx,
## in 1D w/out sources & on a finite interval 0 < x < L. The PDE
## is subject to B.C: u(0,t) = u(L,t) = 0,
## and the I.C: u(x,0) = f(x).
import numpy as np
import matplotlib.pyplot as plt
# definition of initial condition function
def f(x):
    return x**2
# parameters
L = 1
T = 10
N = 10
M = 100
s = 0.25
# uniform mesh
x_init = 0
x_end = L
dx = float(x_end - x_init) / N
#x = np.zeros(N+1)
x = np.arange(x_init, x_end, dx)
x[0] = x_init
# time discretization
t_init = 0
t_end = T
dt = float(t_end - t_init) / M
#t = np.zeros(M+1)
t = np.arange(t_init, t_end, dt)
t[0] = t_init
# Boundary Conditions
for m in xrange(0, M):
    t[m] = m * dt
# Initial Conditions
for j in xrange(0, N):
    x[j] = j * dx
# definition of solution to u_t = k * u_xx
u = np.zeros((N+1, M+1)) # NxM array to store values of the solution
# finite difference scheme
for j in xrange(0, N-1):
    u[j][0] = x**2 #initial condition
    for m in xrange(0, M):
        for j in xrange(1, N-1):
            if j == 1:
                u[j-1][m] = 0 # Boundary condition
            else:
                u[j][m+1] = u[j][m] + s * ( u[j+1][m] - #FDM scheme
                            2 * u[j][m] + u[j-1][m] )
        else:
            if j == N-1:
                u[j+1][m] = 0 # Boundary Condition
print u, t, x
#plt.plot(t, u)
#plt.show()
So the first issue I am having is that I am trying to create an array/matrix to store values of the solution. I wanted it to be an NxM matrix, but in my code I made the matrix (N+1)x(M+1) because I kept getting an error that the index was going out of bounds. Anyway, how can I make such a matrix using numpy.array so as not to needlessly take up memory by creating an (N+1)x(M+1) matrix filled with zeros?
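(For scale, an (N+1)x(M+1) float64 array here is 11 x 101 x 8 bytes, roughly 9 KB, so the zeros cost essentially nothing. If the zero-fill itself ever mattered, np.empty allocates the same storage without initializing it:)

u = np.empty((N+1, M+1))  # same memory footprint as np.zeros, just no zero-fill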
Second, how can I "access" such an array? The real solution u(x,t) is approximated by u(x[j], t[m]), where j is the jth spatial value and m is the mth time value. The finite difference scheme is given by:
u(x[j],t[m+1]) = u(x[j],t[m]) + s * ( u(x[j+1],t[m]) - 2 * u(x[j],t[m]) + u(x[j-1],t[m]) )
(See here for the formulation)
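For reference, a direct transcription of that interior update using NumPy's u[j, m] indexing on the (N+1)x(M+1) array above would be (an illustrative sketch, not my current code):

for m in xrange(0, M):
    for j in xrange(1, N):  # interior spatial points 1..N-1
        u[j, m+1] = u[j, m] + s * (u[j+1, m] - 2*u[j, m] + u[j-1, m])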
I want to be able to implement the Initial Condition u(x[j],t[0]) = x**2 for all values of j = 0,...,N-1. I also need to implement the Boundary Conditions u(x[0],t[m]) = 0 = u(x[N],t[m]) for all values of t = 0,...,M. Is the nested loop I created the best way to do this? Originally I tried implementing the I.C. and B.C. under two different for loops, which I used to calculate the values of the arrays x and t (in my code I still have comments placed where I tried to do this).
I think I am just not using the right notation but I cannot find anywhere in the documentation for NumPy how to "call" such an array so at to iterate through each value in the proposed scheme. Can anyone shed some light on what I am doing wrong?
Any help is very greatly appreciated. This is not homework but rather to understand how to program FDM for Heat Equation because later I will use similar methods to solve the Black-Scholes PDE.
EDIT: When I run my code, line 60 (the last "else" that I use) gives an "invalid syntax" error, and line 51 (u[j][0] = x**2 #initial condition) gives an error that reads "setting an array element with a sequence." What does that mean?

fast method with numpy for 2D Heat equation

I'm looking for a method to solve the 2D heat equation with Python. I have already implemented a finite difference method, but it is slow (doing 100,000 steps takes 30 minutes). The idea is to end up with code in which I can write,
for t in TIME:
    DeltaU = f(U)
    U = U + DeltaU*DeltaT
    save(U)
How can I do that?
In the first version of my code, I used the 2D finite difference method; my grid is 5000x250 (x, y). Now I would like to reduce the computation time, and the idea is to find
DeltaU = f(U)
where U is the heat function. For the implementation I used this source http://www.timteatro.net/2010/10/29/performance-python-solving-the-2d-diffusion-equation-with-numpy/ for the 2D case, but the run time is too expensive for my needs. Are there methods to do this?
Maybe I need to work with the matrix

A = 1/dx^2 * (  2 -1  0  0 ...  0
               -1  2 -1  0 ...  0
                0 -1  2 -1 ...  0
                      ...
                0  ...     -1  2 )

but how do I build this for the 2D problem? How do I insert the boundary conditions into A?
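One standard construction (a dense NumPy sketch for illustration only; for a grid this size a sparse library such as scipy.sparse would be the practical choice) builds the 1D second-difference matrix and combines two of them with a Kronecker sum. With this construction, zero Dirichlet boundary values are implied simply by omitting the boundary points:

import numpy as np

def second_diff(n, h):
    # tridiagonal (2, -1) matrix over h**2, matching A above
    return (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

Nx, Ny, dx, dy = 100, 50, 1.0/100, 1.0/50  # example sizes
Ax = second_diff(Nx, dx)
Ay = second_diff(Ny, dy)
# 2D operator on the flattened Nx*Ny grid: kron(Ax, Iy) + kron(Ix, Ay)
A2 = np.kron(Ax, np.eye(Ny)) + np.kron(np.eye(Nx), Ay)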
This is the code for the finite difference that I used:
import numpy as np
from numpy import linspace, sqrt

Lx = 5000 # physical length of the x vector in microns
Ly = 250  # physical length of the y vector in microns
Nx = 100  # number of mesh points along the x direction
Ny = 50   # number of mesh points along the y direction
a = 0.001 # diffusion coefficient
dx = 1/Nx
dy = 1/Ny
dt = (dx**2*dy**2)/(2*a*(dx**2 + dy**2)) # it is 0.04
x = linspace(0.1, Lx, Nx)[np.newaxis] # vector to create the mesh
y = linspace(0.1, Ly, Ny)[np.newaxis] # vector to create the mesh
I = sqrt(x*y.T) # initial data for the heat equation
u = np.ones((Nx, Ny)) # u is the matrix for the heat function
steps = 100000
for m in range(0, steps):
    du = np.zeros((Nx, Ny))
    for i in range(1, Nx-1):
        for j in range(1, Ny-1):
            dux = ( u[i+1,j] - 2*u[i,j] + u[i-1,j] ) / dx**2
            duy = ( u[i,j+1] - 2*u[i,j] + u[i,j-1] ) / dy**2
            du[i,j] = dt*a*(dux + duy)
    # Boundary Conditions
    t1 = (u[:,0] + u[:,1])/2
    u[:,0] = t1
    u[:,1] = t1
    t2 = (u[0,:] + u[1,:])/2
    u[0,:] = t2
    u[1,:] = t2
    t3 = (u[-1,:] + u[-2,:])/2
    u[-1,:] = t3
    u[-2,:] = t3
    u[:,-1] = 1
    filename1 = 'data_{:08d}.txt'
    if m % 100 == 0:
        np.savetxt(filename1.format(m), u, delimiter='\t')
Running 100,000 steps takes about 30 minutes. I would like to optimize this code (with the idea presented in the opening lines) to get a run time of about 5-10 minutes or less. How can I do it?
There are some simple but tremendous improvements possible.
Just by introducing Dxx, Dyy = 1/(dx*dx), 1/(dy*dy) the runtime drops 25%. By using slices and avoiding for-loops, the code is now 400 times faster.
import numpy as np

def f_for(u):
    for m in range(0, steps):
        du = np.zeros_like(u)
        for i in range(1, Nx-1):
            for j in range(1, Ny-1):
                dux = (u[i+1, j] - 2*u[i, j] + u[i-1, j]) / dx**2
                duy = (u[i, j+1] - 2*u[i, j] + u[i, j-1]) / dy**2
                du[i, j] = dt*a*(dux + duy)
    return du

def f_slice(u):
    du = np.zeros_like(u)
    Dxx, Dyy = 1/dx**2, 1/dy**2
    i = slice(1, Nx-1)
    iw = slice(0, Nx-2)
    ie = slice(2, Nx)
    j = slice(1, Ny-1)
    js = slice(0, Ny-2)
    jn = slice(2, Ny)
    for m in range(0, steps):
        dux = Dxx * (u[ie, j] - 2*u[i, j] + u[iw, j])
        duy = Dyy * (u[i, jn] - 2*u[i, j] + u[i, js])
        du[i, j] = dt*a*(dux + duy)
    return du

Nx = 100 # number of mesh points in the x-direction
Ny = 50  # number of mesh points in the y-direction
a = 0.001 # diffusion coefficient
dx = 1/Nx
dy = 1/Ny
dt = (dx**2*dy**2)/(2*a*(dx**2 + dy**2))
steps = 10000
U = np.ones((Nx, Ny))

%timeit f_for(U)
%timeit f_slice(U)
Have you considered parallelizing your code or using GPU acceleration?
It would help if you ran your code through the Python profiler (cProfile) so that you can figure out where your runtime bottleneck is. I'm assuming it's in solving the matrix equation you get to, which can easily be sped up by the methods I listed above.
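For instance, a minimal way to profile one call (using the f_slice and U names from the answer above):

import cProfile
cProfile.run('f_slice(U)', sort='cumtime')  # report sorted by cumulative time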
I might be wrong, but in the loop you create for the time steps, "for m in range(steps)", the line just below it is du = np.zeros(...). I'm not an expert in Python, but this allocates a fresh zero matrix on every step, in this case 100,000 times.
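A possible fix along those lines (my sketch, not tested against the original code): allocate du once, outside the time loop, and reset it in place on each step.

du = np.zeros((Nx, Ny))   # allocate once
for m in range(steps):
    du[:] = 0.0           # reset in place instead of reallocating
    # ... fill du and update u as before ...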

Correctly annotate a numba function using jit

I started with this code to calculate a simple matrix multiplication. It runs with %timeit in around 7.85s on my machine.
To try to speed this up I tried Cython, which reduced the time to 0.4s. I also want to try the Numba jit compiler to see if I can get similar speed-ups (with less effort). But adding the @jit annotation appears to give exactly the same timings (~7.8s). I know it can't figure out the types of the calculate_z_numpy() call, but I'm not sure what I can do to coerce it. Any ideas?
from numba import jit
import numpy as np
@jit('f8(c8[:],c8[:],uint)')
def calculate_z_numpy(q, z, maxiter):
    """use vector operations to update all zs and qs to create new output array"""
    output = np.resize(np.array(0, dtype=np.int32), q.shape)
    for iteration in range(maxiter):
        z = z*z + q
        done = np.greater(abs(z), 2.0)
        q = np.where(done, 0+0j, q)
        z = np.where(done, 0+0j, z)
        output = np.where(done, iteration, output)
    return output
def calc_test():
    w = h = 1000
    maxiter = 1000
    # make a list of x and y values which will represent q
    # xx and yy are the co-ordinates, for the default configuration they'll look like:
    # if we have a 1000x1000 plot
    # xx = [-2.13, -2.1242,-2.1184000000000003, ..., 0.7526000000000064, 0.7584000000000064, 0.7642000000000064]
    # yy = [1.3, 1.2948, 1.2895999999999999, ..., -1.2844000000000058, -1.2896000000000059, -1.294800000000006]
    x1, x2, y1, y2 = -2.13, 0.77, -1.3, 1.3
    x_step = (float(x2 - x1) / float(w)) * 2
    y_step = (float(y1 - y2) / float(h)) * 2
    y = np.arange(y2, y1-y_step, y_step, dtype=np.complex)
    x = np.arange(x1, x2, x_step)
    q1 = np.empty(y.shape[0], dtype=np.complex)
    q1.real = x
    q1.imag = y
    # Transpose y
    x_y_square_matrix = x + y[:, np.newaxis] # it is np.complex128
    # convert square matrix to a flattened vector using ravel
    q2 = np.ravel(x_y_square_matrix)
    # create z as a 0+0j array of the same length as q
    # note that it defaults to reals (float64) unless told otherwise
    z = np.zeros(q2.shape, np.complex128)
    output = calculate_z_numpy(q2, z, maxiter)
    print(output)

calc_test()
I figured out how to do this with some help from someone else.
@jit('i4[:](c16[:],c16[:],i4,i4[:])', nopython=True)
def calculate_z_numpy(q, z, maxiter, output):
    """use explicit loops to update all zs and qs and fill the output array"""
    for iteration in range(maxiter):
        for i in range(len(z)):
            z[i] = z[i]*z[i] + q[i]
            if abs(z[i]) > 2:
                output[i] = iteration
                z[i] = 0+0j
                q[i] = 0+0j
    return output
What I learnt is to use NumPy data structures as inputs (for typing), but to use C-like paradigms for the looping inside the function.
This runs in 402ms, which is a touch faster than the Cython code (0.45s), so for fairly minimal work in rewriting the loop explicitly we have a Python version that is faster than C (just).
