numpy.convolve and scipy.signal.fftconvolve give different results - python

I have two arrays (G and G_). They have the same shape and size, and I want to convolve them. I found numpy.convolve and scipy.signal.fftconvolve.
My code looks like this:
foldedX = getFoldGradientsFFT(G, G_)
foldedY = getFoldGradientsNumpy(G, G_)

def getFoldGradientsFFT(G, G_):
    # convolve via scipy fast fourier transform
    X = signal.fftconvolve(G, G_, "same")
    X *= 255.0/numpy.max(X)
    return X

def getFoldGradientsNumpy(G, G_):
    # convolve via ndimage.convolve
    Y = ndimage.convolve(G, G_)
    Y *= 255.0/numpy.max(Y)
    return Y
But the results aren't the same. They look like this:
numpy.convolve():
[ 11.60287582 3.28262652 18.80395211 52.75829556 99.61675945
147.74124258 187.66178244 215.06160439 234.1907606 229.04221552]
scipy.signal.fftconvolve:
[ -4.88130620e-15 6.74371119e-02 4.91875539e+00 1.94250997e+01
3.88227012e+01 6.70322921e+01 9.78460423e+01 1.08486302e+02
1.17267015e+02 1.15691562e+02]
I thought the results were supposed to be the same, even if the two functions compute the convolution with different procedures?!
Edit: I forgot to mention that I want to convolve two 2-dimensional arrays.
The arrays:
G = array([[1,2],[3,4]])
G_ = array([[5,6],[7,8]])
The code:
def getFoldGradientsFFT(G, G_):
    X = signal.fftconvolve(G, G_, "same")
    X = X.astype("int")
    X *= 255.0/np.max(X)
    return X

def getFoldGradientsNumpy(G, G_):
    # convolve via np.convolve on the flattened arrays
    old_shape = G.shape
    G = np.reshape(G, G.size)
    G_ = np.reshape(G_, G.size)
    Y = np.convolve(G, G_, "same")
    Y = np.reshape(Y, old_shape)
    Y = Y.astype("int")
    Y *= 255.0/np.max(Y)
    return Y

def getFoldGradientsNDImage(G, G_):
    Y = ndimage.convolve(G, G_)
    Y = Y.astype("int")
    Y *= 255.0/np.max(Y)
    return Y
The results:
getFoldGradientsFFT
[[ 21 68]
[ 93 255]]
getFoldGradientsNumpy
[[ 66 142]
[250 255]]
getFoldGradientsNDImage
[[147 181]
[220 255]]

numpy.convolve is for one-dimensional data.
The following code compares the results of signal.convolve, signal.fftconvolve, and ndimage.convolve.
For ndimage.convolve, we need to set the mode argument to "constant" and the origin argument to -1 when N is even, or 0 when N is odd.
from scipy import signal
from scipy import ndimage
import numpy as np

np.random.seed(1)
for N in range(2, 20):
    a = np.random.randint(0, 10, size=(N, N))
    b = np.random.randint(0, 10, size=(N, N))
    r1 = signal.convolve(a, b, mode="same")
    r2 = signal.fftconvolve(a, b, mode="same")
    r3 = ndimage.convolve(a, b, mode="constant", origin=-1 if N % 2 == 0 else 0)
    print("N =", N)
    print(np.allclose(r1, r2))
    print(np.allclose(r2, r3))
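As a quick sanity check on the first point, numpy.convolve really does accept only one-dimensional input; a minimal sketch:

import numpy as np

a2 = np.ones((3, 3))
try:
    np.convolve(a2, a2)  # numpy.convolve expects 1-D sequences
except ValueError as exc:
    print("np.convolve rejects 2-D input:", exc)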

getFoldGradientsNumpy is using scipy.ndimage.convolve. That does multi-dimensional convolution and is not the same as scipy.convolve.
For me, when convolving two one-dimensional arrays, scipy.convolve, scipy.signal.convolve, and scipy.signal.fftconvolve all return the same answer.
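A minimal sketch of that 1-D check, using numpy.convolve in place of the old scipy.convolve alias (which recent SciPy versions no longer expose):

import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
a = rng.random(20)
b = rng.random(20)

r1 = np.convolve(a, b, mode="same")
r2 = signal.convolve(a, b, mode="same")
r3 = signal.fftconvolve(a, b, mode="same")
print(np.allclose(r1, r2), np.allclose(r2, r3))  # True True, up to FFT rounding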

Related

Why am I getting an incorrect result from multiplying an inverted matrix by a vector?

I'm trying to learn Python for basic work in linear algebra. I'm running into the following problem with a simple system of linear equations:
import scipy.linalg as la
import numpy as np
A = np.array([[186/450, 54/21, 30/60],
              [12/450,   6/21,  3/60],
              [9/450,    6/21, 15/60]])
l = np.array([18/450, 12/21, 30/36])
b = np.array([2, 0, 1/6])
y = np.array([180, 0, 30])
x = la.inv(np.eye(3) - A) @ y
lam = np.transpose(l) @ la.inv(np.eye(3) - A)
This returns
array([0.21212121, 2.12121212, 1.39393939])
which is incorrect. Performing the same operation in Julia,
A = [186/450 54/21 30/60;
     12/450  6/21  3/60;
     9/450   6/21  15/60]
l = [18/450, 12/21, 30/60]
b = [2, 0, 1/6]
y = [180, 0, 30]
λ = l' * inv(I - A)
yields the correct result, which is
1×3 adjoint(::Vector{Float64}) with eltype Float64:
0.181818 1.81818 0.909091
What am I missing here? I think I might be missing something in the opaque numpy array syntax.
There is a typo in the instantiation of l in your Python code: 30/36 should be 30/60.
With the typo fixed, this code produces the same result as in Julia:
import scipy.linalg as la
import numpy as np

A = np.array([[186/450, 54/21, 30/60],
              [12/450,   6/21,  3/60],
              [9/450,    6/21, 15/60]])
l = np.array([18/450, 12/21, 30/60])  # typo fixed here
b = np.array([2, 0, 1/6])
y = np.array([180, 0, 30])

x = la.inv(np.eye(3) - A) @ y
lam = np.transpose(l) @ la.inv(np.eye(3) - A)
Giving:
array([0.18181818, 1.81818182, 0.90909091])
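As an aside, the same numbers can be obtained without forming the inverse explicitly, which is usually preferable numerically; a minimal sketch using np.linalg.solve (same A and l as above):

import numpy as np

A = np.array([[186/450, 54/21, 30/60],
              [12/450,   6/21,  3/60],
              [9/450,    6/21, 15/60]])
l = np.array([18/450, 12/21, 30/60])

# lam = l^T (I - A)^{-1} is exactly the solution of (I - A)^T lam = l
lam = np.linalg.solve((np.eye(3) - A).T, l)
print(lam)  # [0.18181818 1.81818182 0.90909091]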

Python numpy matrix of matrices

I have this code, and it works. It just seems like there may be a better way to do this. Does anyone know a cleaner solution?
def Matrix2toMatrix(Matrix2):
    scaleSize = len(Matrix2[0, 0])
    FinalMatrix = np.empty([len(Matrix2)*scaleSize, len(Matrix2[0])*scaleSize])
    for x in range(0, len(Matrix2)):
        for y in range(0, len(Matrix2[0])):
            for xFinal in range(0, scaleSize):
                for yFinal in range(0, scaleSize):
                    FinalMatrix[(x*scaleSize)+xFinal, (y*scaleSize)+yFinal] = Matrix2[x, y][xFinal, yFinal]
    return FinalMatrix
This is where Matrix2 is a 4x4 matrix, with each cell containing a 2x2 matrix
Full code in case anyone was wondering:
import matplotlib.pyplot as plt
import numpy as np

def Matrix2toMatrix(Matrix2):
    scaleSize = len(Matrix2[0, 0])
    FinalMatrix = np.empty([len(Matrix2)*scaleSize, len(Matrix2[0])*scaleSize])
    for x in range(0, len(Matrix2)):
        for y in range(0, len(Matrix2[0])):
            for xFinal in range(0, scaleSize):
                for yFinal in range(0, scaleSize):
                    FinalMatrix[(x*scaleSize)+xFinal, (y*scaleSize)+yFinal] = Matrix2[x, y][xFinal, yFinal]
    return FinalMatrix

XSize = 4
Xtest = np.array([[255, 255, 255, 255]
                 ,[255, 255, 255, 255]
                 ,[127, 127, 127, 127]
                 ,[0, 0, 0, 0]
                 ])
scaleFactor = 2

XMarixOfMatrix = np.empty([XSize, XSize], dtype=object)
Xexpanded = np.empty([XSize*scaleFactor, XSize*scaleFactor], dtype=int)  # careful, will contain garbage data

for xOrg in range(0, XSize):
    for yOrg in range(0, XSize):
        newMatrix = np.empty([scaleFactor, scaleFactor], dtype=int)  # careful, will contain garbage data
        # grab org point equivalent
        pointValue = Xtest[xOrg, yOrg]
        newMatrix.fill(pointValue)
        # now write the data
        XMarixOfMatrix[xOrg, yOrg] = newMatrix

# need to concat all matrices together to form a larger singular matrix
Xexpanded = Matrix2toMatrix(XMarixOfMatrix)

img = plt.imshow(Xexpanded)
img.set_cmap('gray')
plt.axis('off')
plt.show()
Permute axes and reshape -
m,n = Matrix2.shape[0], Matrix2.shape[2]
out = Matrix2.swapaxes(1,2).reshape(m*n,-1)
For permuting axes, we could also use np.transpose or np.rollaxis, as functionally all are the same.
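For instance, a minimal check (assuming a 4x4 grid of 2x2 blocks, as in the question) that the np.transpose route matches the swapaxes route:

import numpy as np

Matrix2 = np.random.rand(4, 4, 2, 2)  # 4x4 grid of 2x2 blocks
m, n = Matrix2.shape[0], Matrix2.shape[2]

out_swap = Matrix2.swapaxes(1, 2).reshape(m*n, -1)
out_tran = np.transpose(Matrix2, (0, 2, 1, 3)).reshape(m*n, -1)
print(np.allclose(out_swap, out_tran))  # True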
Verify with sample run -
In [17]: Matrix2 = np.random.rand(3,3,3,3)
# With given solution
In [18]: out1 = Matrix2toMatrix(Matrix2)
In [19]: m,n = Matrix2.shape[0], Matrix2.shape[2]
...: out2 = Matrix2.swapaxes(1,2).reshape(m*n,-1)
In [20]: np.allclose(out1, out2)
Out[20]: True

How to get the index of a list items in another list?

Consider I have these lists:
l = [5,6,7,8,9,10,5,15,20]
m = [10,5]
I want to get the index of m in l. I used list comprehension to do that:
[(i,i+1) for i,j in enumerate(l) if m[0] == l[i] and m[1] == l[i+1]]
Output : [(5,6)]
But if I have more numbers in m, I feel this isn't the right way to do it. So is there any easy approach in Python or with NumPy?
Another example:
l = [5,6,7,8,9,10,5,15,20,50,16,18]
m = [10,5,15,20]
The output should be:
[(5,6,7,8)]
The easiest way (using pure Python) would be to iterate over the items and first only check if the first item matches. This avoids doing sublist comparisons when not needed. Depending on the contents of your l this could outperform even NumPy broadcasting solutions:
def func(haystack, needle):  # obviously needs a better name ...
    if not needle:
        return
    # just optimization
    lengthneedle = len(needle)
    firstneedle = needle[0]
    for idx, item in enumerate(haystack):
        if item == firstneedle:
            if haystack[idx:idx+lengthneedle] == needle:
                yield tuple(range(idx, idx+lengthneedle))
>>> list(func(l, m))
[(5, 6, 7, 8)]
In case you're interested in speed, I checked the performance of the approaches (borrowing from my setup here):
import random
import numpy as np

# strided_app is from https://stackoverflow.com/a/40085052/
def strided_app(a, L, S):  # Window len = L, Stride len/stepsize = S
    nrows = ((a.size-L)//S)+1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S*n, n))

def pattern_index_broadcasting(all_data, search_data):
    n = len(search_data)
    all_data = np.asarray(all_data)
    all_data_2D = strided_app(np.asarray(all_data), n, S=1)
    return np.flatnonzero((all_data_2D == search_data).all(1))

# view1D is from https://stackoverflow.com/a/45313353/
def view1D(a, b):  # a, b are arrays
    a = np.ascontiguousarray(a)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(void_dt).ravel(), b.view(void_dt).ravel()

def pattern_index_view1D(all_data, search_data):
    a = strided_app(np.asarray(all_data), L=len(search_data), S=1)
    a0v, b0v = view1D(np.asarray(a), np.asarray(search_data))
    return np.flatnonzero(np.in1d(a0v, b0v))

def find_sublist_indices(haystack, needle):
    if not needle:
        return
    # just optimization
    lengthneedle = len(needle)
    firstneedle = needle[0]
    restneedle = needle[1:]
    for idx, item in enumerate(haystack):
        if item == firstneedle:
            if haystack[idx+1:idx+lengthneedle] == restneedle:
                yield tuple(range(idx, idx+lengthneedle))

def Divakar1(l, m):
    return np.squeeze(pattern_index_broadcasting(l, m)[:,None] + np.arange(len(m)))

def Divakar2(l, m):
    return np.squeeze(pattern_index_view1D(l, m)[:,None] + np.arange(len(m)))

def MSeifert(l, m):
    return list(find_sublist_indices(l, m))

# Timing setup
timings = {Divakar1: [], Divakar2: [], MSeifert: []}
sizes = [2**i for i in range(5, 20, 2)]

# Timing
for size in sizes:
    l = [random.randint(0, 50) for _ in range(size)]
    m = [random.randint(0, 50) for _ in range(10)]
    larr = np.asarray(l)
    marr = np.asarray(m)
    for func in timings:
        # first timings:
        # res = %timeit -o func(l, m)
        # second timings:
        if func is MSeifert:
            res = %timeit -o func(l, m)
        else:
            res = %timeit -o func(larr, marr)
        timings[func].append(res)
%matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(1)
ax = plt.subplot(111)

for func in timings:
    ax.plot(sizes,
            [time.best for time in timings[func]],
            label=str(func.__name__))

ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('size')
ax.set_ylabel('time [seconds]')
ax.grid(which='both')
ax.legend()
plt.tight_layout()
In case your l and m are lists, my function outperforms the NumPy solutions for all sizes.
But in case you have these as NumPy arrays, you'll get faster results for large arrays (size > 1000 elements) when using Divakar's NumPy solutions.
You are basically looking for the starting indices of a list in another list.
Approach #1 : One approach would be to create sliding windows of the elements in the list being searched, giving us a 2D array, and then simply use NumPy broadcasting to compare the search list against each row of that 2D sliding-window array. Thus, one method would be -
# strided_app is from https://stackoverflow.com/a/40085052/
def strided_app(a, L, S):  # Window len = L, Stride len/stepsize = S
    nrows = ((a.size-L)//S)+1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(a, shape=(nrows, L), strides=(S*n, n))

def pattern_index_broadcasting(all_data, search_data):
    n = len(search_data)
    all_data = np.asarray(all_data)
    all_data_2D = strided_app(np.asarray(all_data), n, S=1)
    return np.flatnonzero((all_data_2D == search_data).all(1))

out = np.squeeze(pattern_index_broadcasting(l, m)[:,None] + np.arange(len(m)))
Sample runs -
In [340]: l = [5,6,7,8,9,10,5,15,20,50,16,18]
...: m = [10,5,15,20]
...:
In [341]: np.squeeze(pattern_index_broadcasting(l, m)[:,None] + np.arange(len(m)))
Out[341]: array([5, 6, 7, 8])
In [342]: l = [5,6,7,8,9,10,5,15,20,50,16,18,10,5,15,20]
...: m = [10,5,15,20]
...:
In [343]: np.squeeze(pattern_index_broadcasting(l, m)[:,None] + np.arange(len(m)))
Out[343]:
array([[ 5, 6, 7, 8],
[12, 13, 14, 15]])
Approach #2 : Another method would be to get the sliding windows and then take a row-wise scalar view into both the searched data and the search pattern, giving us 1D data to work with, like so -
# view1D is from https://stackoverflow.com/a/45313353/
def view1D(a, b):  # a, b are arrays
    a = np.ascontiguousarray(a)
    void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1]))
    return a.view(void_dt).ravel(), b.view(void_dt).ravel()

def pattern_index_view1D(all_data, search_data):
    a = strided_app(np.asarray(all_data), L=len(search_data), S=1)
    a0v, b0v = view1D(np.asarray(a), np.asarray(search_data))
    return np.flatnonzero(np.in1d(a0v, b0v))

out = np.squeeze(pattern_index_view1D(l, m)[:,None] + np.arange(len(m)))
2020 Versions
In search of easier/more compact approaches, we could look into scikit-image's view_as_windows for getting sliding windows with a built-in. I am assuming arrays as inputs for less messy code. For lists as input, we have to use np.asarray() as shown earlier.
Approach #3 : Basically a derivative of pattern_index_broadcasting with view_as_windows for a one-liner, with a as the larger data and b as the array to be searched for -
from skimage.util import view_as_windows
np.flatnonzero((view_as_windows(a,len(b))==b).all(1))[:,None]+np.arange(len(b))
Approach #4 : For a small number of matches from b in a, we could optimize by looking for first-element matches from b to reduce the dataset size for searches -
mask = a[:-len(b)+1]==b[0]
mask[mask] = (view_as_windows(a,len(b))[mask]).all(1)
out = np.flatnonzero(mask)[:,None]+np.arange(len(b))
Approach #5 : For a small sized b, we could simply run a loop for each of the elements in b and perform bitwise and-reduction -
mask = np.bitwise_and.reduce([a[i:len(a)-len(b)+1+i]==b[i] for i in range(len(b))])
out = np.flatnonzero(mask)[:,None]+np.arange(len(b))
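A sample run of Approach #3 on the question's data (this assumes scikit-image is installed; Approaches #4 and #5 use the same a and b):

import numpy as np
from skimage.util import view_as_windows

a = np.array([5, 6, 7, 8, 9, 10, 5, 15, 20, 50, 16, 18])
b = np.array([10, 5, 15, 20])

out = np.flatnonzero((view_as_windows(a, len(b)) == b).all(1))[:, None] + np.arange(len(b))
print(out)  # [[5 6 7 8]]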
Just making the point that @MSeifert's approach can, of course, also be implemented in NumPy:
def pp(h, n):
    nn = len(n)
    NN = len(h)
    c = (h[:NN-nn+1]==n[0]).nonzero()[0]
    if c.size==0: return
    for i, l in enumerate(n[1:].tolist(), 1):
        c = c[h[i:][c]==l]
        if c.size==0: return
    return np.arange(c[0], c[0]+nn)
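A quick check of that variant against the running example (note that it returns only the first match):

import numpy as np

h = np.array([5, 6, 7, 8, 9, 10, 5, 15, 20, 50, 16, 18])
n = np.array([10, 5, 15, 20])
print(pp(h, n))  # [5 6 7 8]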
from collections import defaultdict

def get_data(l1, l2):
    # store the indices of each element of l1
    d = defaultdict(list)
    for index, item in enumerate(l1):
        d[item].append(index)
    print(d)
Using a defaultdict to store the indices of elements from the other list.
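As written, get_data only builds the index dict; one possible way to finish the search with it (a sketch of my own, not spelled out in the answer) is to treat the stored indices of m[0] as candidate start positions:

from collections import defaultdict

l = [5, 6, 7, 8, 9, 10, 5, 15, 20, 50, 16, 18]
m = [10, 5, 15, 20]

d = defaultdict(list)
for index, item in enumerate(l):
    d[item].append(index)

# only positions of m[0] can start a match
matches = [tuple(range(i, i + len(m)))
           for i in d[m[0]]
           if l[i:i + len(m)] == m]
print(matches)  # [(5, 6, 7, 8)]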

How to calculate a Gaussian kernel matrix efficiently in numpy?

def GaussianMatrix(X, sigma):
    row, col = X.shape
    GassMatrix = np.zeros(shape=(row, row))
    X = np.asarray(X)
    i = 0
    for v_i in X:
        j = 0
        for v_j in X:
            GassMatrix[i, j] = Gaussian(v_i.T, v_j.T, sigma)
            j += 1
        i += 1
    return GassMatrix

def Gaussian(x, z, sigma):
    return np.exp((-(np.linalg.norm(x-z)**2))/(2*sigma**2))
This is my current way. Is there any way I can use matrix operations to do this? X is the data points.
I myself used the accepted answer for my image processing, but I find it (and the other answers) too dependent on other modules. Therefore, here is my compact solution:
import numpy as np

def gkern(l=5, sig=1.):
    """
    Creates a Gaussian kernel with side length `l` and a sigma of `sig`.
    """
    ax = np.linspace(-(l - 1) / 2., (l - 1) / 2., l)
    gauss = np.exp(-0.5 * np.square(ax) / np.square(sig))
    kernel = np.outer(gauss, gauss)
    return kernel / np.sum(kernel)
Edit: Changed arange to linspace to handle even side lengths.
Edit: Use separability for faster computation; thank you, Yves Daoust.
Do you want to use the Gaussian kernel for e.g. image smoothing? If so, there's a function gaussian_filter() in scipy:
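A minimal sketch of that route, smoothing an image directly instead of building the kernel first (the img array here is just a stand-in for your data):

import numpy as np
from scipy.ndimage import gaussian_filter

img = np.random.rand(64, 64)       # stand-in for your image
smoothed = gaussian_filter(img, sigma=3)
print(smoothed.shape)              # (64, 64)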
Updated answer
This should work - while it's still not 100% accurate, it attempts to account for the probability mass within each cell of the grid. I think that using the probability density at the midpoint of each cell is slightly less accurate, especially for small kernels. See https://homepages.inf.ed.ac.uk/rbf/HIPR2/gsmooth.htm for an example.
import numpy as np
import scipy.stats as st

def gkern(kernlen=21, nsig=3):
    """Returns a 2D Gaussian kernel."""
    x = np.linspace(-nsig, nsig, kernlen+1)
    kern1d = np.diff(st.norm.cdf(x))
    kern2d = np.outer(kern1d, kern1d)
    return kern2d/kern2d.sum()
Testing it on the example in Figure 3 from the link:
gkern(5, 2.5)*273
gives
array([[ 1.0278445 , 4.10018648, 6.49510362, 4.10018648, 1.0278445 ],
[ 4.10018648, 16.35610171, 25.90969361, 16.35610171, 4.10018648],
[ 6.49510362, 25.90969361, 41.0435344 , 25.90969361, 6.49510362],
[ 4.10018648, 16.35610171, 25.90969361, 16.35610171, 4.10018648],
[ 1.0278445 , 4.10018648, 6.49510362, 4.10018648, 1.0278445 ]])
The original (accepted) answer below is wrong
The square root is unnecessary, and the definition of the interval is incorrect.
import numpy as np
import scipy.stats as st

def gkern(kernlen=21, nsig=3):
    """Returns a 2D Gaussian kernel array."""
    interval = (2*nsig+1.)/(kernlen)
    x = np.linspace(-nsig-interval/2., nsig+interval/2., kernlen+1)
    kern1d = np.diff(st.norm.cdf(x))
    kernel_raw = np.sqrt(np.outer(kern1d, kern1d))
    kernel = kernel_raw/kernel_raw.sum()
    return kernel
I'm trying to improve on FuzzyDuck's answer here. I think this approach is shorter and easier to understand. Here I'm using scipy.signal.gaussian to get the 2D Gaussian kernel.
import numpy as np
from scipy import signal

def gkern(kernlen=21, std=3):
    """Returns a 2D Gaussian kernel array."""
    gkern1d = signal.gaussian(kernlen, std=std).reshape(kernlen, 1)
    gkern2d = np.outer(gkern1d, gkern1d)
    return gkern2d
Plotting it using matplotlib.pyplot:
import matplotlib.pyplot as plt
plt.imshow(gkern(21), interpolation='none')
You may simply gaussian-filter a simple 2D Dirac function; the result is then the filter function that was being used:
import numpy as np
import scipy.ndimage.filters as fi

def gkern2(kernlen=21, nsig=3):
    """Returns a 2D Gaussian kernel array."""
    # create nxn zeros
    inp = np.zeros((kernlen, kernlen))
    # set element at the middle to one, a dirac delta
    inp[kernlen//2, kernlen//2] = 1
    # gaussian-smooth the dirac, resulting in a gaussian filter mask
    return fi.gaussian_filter(inp, nsig)
I tried using numpy only. Here is the code
def get_gauss_kernel(size=3, sigma=1):
    center = int(size/2)
    kernel = np.zeros((size, size))
    for i in range(size):
        for j in range(size):
            diff = np.sqrt((i-center)**2 + (j-center)**2)
            kernel[i, j] = np.exp(-(diff**2)/(2*sigma**2))
    return kernel/np.sum(kernel)
You can visualise the result using:
plt.imshow(get_gauss_kernel(5,1))
A 2D gaussian kernel matrix can be computed with numpy broadcasting:

def gaussian_kernel(size=21, sigma=3):
    """Returns a 2D Gaussian kernel.
    Parameters
    ----------
    size : int, the kernel size (will be square)
    sigma : float, the sigma Gaussian parameter
    Returns
    -------
    out : array, shape = (size, size)
        an array with the centered gaussian kernel
    """
    x = np.linspace(-(size // 2), size // 2, size)
    x /= np.sqrt(2)*sigma
    x2 = x**2
    kernel = np.exp(-x2[:, None] - x2[None, :])
    return kernel / kernel.sum()
For small kernel sizes this should be reasonably fast.
Note: this makes changing the sigma parameter easier with respect to the accepted answer.
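A quick sanity check on the function above (the kernel should be square and normalized):

k = gaussian_kernel(5, 1)
print(k.shape)            # (5, 5)
print(round(k.sum(), 6))  # 1.0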
If you are a computer vision engineer and you need a heatmap for a particular point as a Gaussian distribution (especially for keypoint detection on an image):
def gaussian_heatmap(center=(2, 2), image_size=(10, 10), sig=1):
    """
    It produces a single gaussian at the expected center.
    :param center: the mean position (X, Y) - where the high value is expected
    :param image_size: the total image size (width, height)
    :param sig: the sigma value
    :return: the heatmap array
    """
    x_axis = np.linspace(0, image_size[0]-1, image_size[0]) - center[0]
    y_axis = np.linspace(0, image_size[1]-1, image_size[1]) - center[1]
    xx, yy = np.meshgrid(x_axis, y_axis)
    kernel = np.exp(-0.5 * (np.square(xx) + np.square(yy)) / np.square(sig))
    return kernel
The usage and output
kernel = gaussian_heatmap(center = (2, 2), image_size = (10, 10), sig = 1)
plt.imshow(kernel)
print("max at :", np.unravel_index(kernel.argmax(), kernel.shape))
print("kernel shape", kernel.shape)
max at : (2, 2)
kernel shape (10, 10)
kernel = gaussian_heatmap(center = (25, 40), image_size = (100, 50), sig = 5)
plt.imshow(kernel)
print("max at :", np.unravel_index(kernel.argmax(), kernel.shape))
print("kernel shape", kernel.shape)
max at : (40, 25)
kernel shape (50, 100)
linalg.norm takes an axis parameter. With a little experimentation I found I could calculate the norm for all combinations of rows with
np.linalg.norm(x[None,:,:]-x[:,None,:],axis=2)
It expands x into a 3d array of all differences, and takes the norm on the last dimension.
So I can apply this to your code by adding the axis parameter to your Gaussian:
def Gaussian(x, z, sigma, axis=None):
    return np.exp((-(np.linalg.norm(x-z, axis=axis)**2))/(2*sigma**2))
x=np.arange(12).reshape(3,4)
GaussianMatrix(x,1)
produces
array([[ 1.00000000e+00, 1.26641655e-14, 2.57220937e-56],
[ 1.26641655e-14, 1.00000000e+00, 1.26641655e-14],
[ 2.57220937e-56, 1.26641655e-14, 1.00000000e+00]])
Matching:
Gaussian(x[None,:,:],x[:,None,:],1,axis=2)
array([[ 1.00000000e+00, 1.26641655e-14, 2.57220937e-56],
[ 1.26641655e-14, 1.00000000e+00, 1.26641655e-14],
[ 2.57220937e-56, 1.26641655e-14, 1.00000000e+00]])
Building on Teddy Hartanto's answer: you can just calculate your own one-dimensional Gaussian functions and then use np.outer to calculate the two-dimensional one. A very fast and efficient way.
With the code below you can also use different sigmas for every dimension:
import numpy as np

def generate_gaussian_mask(shape, sigma, sigma_y=None):
    if sigma_y is None:
        sigma_y = sigma
    rows, cols = shape

    def get_gaussian_fct(size, sigma):
        fct_gaus_x = np.linspace(0, size, size)
        fct_gaus_x = fct_gaus_x - size/2
        fct_gaus_x = fct_gaus_x**2
        fct_gaus_x = fct_gaus_x/(2*sigma**2)
        fct_gaus_x = np.exp(-fct_gaus_x)
        return fct_gaus_x

    mask = np.outer(get_gaussian_fct(rows, sigma), get_gaussian_fct(cols, sigma_y))
    return mask
A good way to do that is to use the gaussian_filter function to recover the kernel.
For instance:

import numpy as np
from scipy.ndimage import gaussian_filter

indicatrice = np.zeros((5, 5))
indicatrice[2, 2] = 1
gaussian_kernel = gaussian_filter(indicatrice, sigma=1)
gaussian_kernel /= gaussian_kernel[2, 2]
This gives

array([[0.02144593, 0.08887207, 0.14644428, 0.08887207, 0.02144593],
       [0.08887207, 0.36828649, 0.60686612, 0.36828649, 0.08887207],
       [0.14644428, 0.60686612, 1.        , 0.60686612, 0.14644428],
       [0.08887207, 0.36828649, 0.60686612, 0.36828649, 0.08887207],
       [0.02144593, 0.08887207, 0.14644428, 0.08887207, 0.02144593]])
Adapting the accepted answer by FuzzyDuck to match the results of this website: http://dev.theomader.com/gaussian-kernel-calculator/ , I now present this definition to you:
import numpy as np
import scipy.stats as st

def gkern(kernlen=21, sig=3):
    """Returns a 2D Gaussian kernel."""
    x = np.linspace(-(kernlen/2)/sig, (kernlen/2)/sig, kernlen+1)
    kern1d = np.diff(st.norm.cdf(x))
    kern2d = np.outer(kern1d, kern1d)
    return kern2d/kern2d.sum()

print(gkern(kernlen=5, sig=1))
output:
[[0.003765 0.015019 0.02379159 0.015019 0.003765 ]
[0.015019 0.05991246 0.0949073 0.05991246 0.015019 ]
[0.02379159 0.0949073 0.15034262 0.0949073 0.02379159]
[0.015019 0.05991246 0.0949073 0.05991246 0.015019 ]
[0.003765 0.015019 0.02379159 0.015019 0.003765 ]]
As I didn't find what I was looking for, I coded my own one-liner. You can modify it according to the dimensions and the standard deviation.
Here is the one-liner function for a 3x5 patch, for example:
from scipy import signal

def gaussian2D(patchHeight, patchWidth, stdHeight=1, stdWidth=1):
    gaussianWindow = signal.gaussian(patchHeight, stdHeight).reshape(-1, 1) @ signal.gaussian(patchWidth, stdWidth).reshape(1, -1)
    return gaussianWindow

print(gaussian2D(3, 5))
You get an output like this:
[[0.082085 0.36787944 0.60653066 0.36787944 0.082085 ]
[0.13533528 0.60653066 1. 0.60653066 0.13533528]
[0.082085 0.36787944 0.60653066 0.36787944 0.082085 ]]
You can read more about scipy's Gaussian here.
Yet another implementation.
This is normalized so that for sigma > 1 and sufficiently large win_size, the total sum of the kernel elements equals 1.
def gaussian_kernel(win_size, sigma):
    t = np.arange(win_size)
    x, y = np.meshgrid(t, t)
    o = (win_size - 1) / 2
    r = np.sqrt((x - o)**2 + (y - o)**2)
    scale = 1 / (sigma**2 * 2 * np.pi)
    return scale * np.exp(-0.5 * (r / sigma)**2)
To generate a 5x5 kernel:
gaussian_kernel(win_size=5, sigma=1)
I took a similar approach to Nils Werner's answer -- since convolution of any kernel with a Kronecker delta results in the kernel itself centered around that Kronecker delta -- but I made it slightly more general to deal with both odd and even dimensions. In three lines:
import numpy as np
import scipy.ndimage as scim

def gaussian_kernel(dimension: int, sigma: float):
    dirac = np.zeros((dimension, dimension))
    dirac[(dimension-1)//2:dimension//2+1, (dimension-1)//2:dimension//2+1] = 1.0 / (1 + 3 * ((dimension + 1) % 2))
    return scim.gaussian_filter(dirac, sigma=sigma)
The second line creates either a single 1.0 in the middle of the matrix (if the dimension is odd), or a square of four 0.25 elements (if the dimension is even). The division could be moved to the third line too; the result is normalised either way.
For those who like to have the kernel the matrix with one (odd) or four (even) 1.0 element(s) in the middle instead of normalisation, this works:
import numpy as np
import scipy.ndimage as scim

def gaussian_kernel(dimension: int, sigma: float, ones_in_the_middle=False):
    dirac = np.zeros((dimension, dimension))
    dirac[(dimension-1)//2:dimension//2+1, (dimension-1)//2:dimension//2+1] = 1.0
    kernel = scim.gaussian_filter(dirac, sigma=sigma)
    divisor = kernel[(dimension-1)//2, (dimension-1)//2] if ones_in_the_middle else 1 + 3 * ((dimension + 1) % 2)
    return kernel/divisor

how to apply a mask from one array to another array?

I've read the masked array documentation several times now, searched everywhere, and feel thoroughly stupid. I can't figure out for the life of me how to apply a mask from one array to another.
Example:
import numpy as np
y = np.array([2,1,5,2]) # y axis
x = np.array([1,2,3,4]) # x axis
m = np.ma.masked_where(y>2, y) # filter out values larger than 2
print m
[2 1 -- 2]
print np.ma.compressed(m)
[2 1 2]
So this works fine.... but to plot this y axis, I need a matching x axis. How do I apply the mask from the y array to the x array? Something like this would make sense, but produces rubbish:
new_x = x[m.mask].copy()
new_x
array([5])
So, how on earth is that done (note the new x array needs to be a new array).
Edit:
Well, it seems one way to do this works like this:
>>> import numpy as np
>>> x = np.array([1,2,3,4])
>>> y = np.array([2,1,5,2])
>>> m = np.ma.masked_where(y>2, y)
>>> new_x = np.ma.masked_array(x, m.mask)
>>> print np.ma.compressed(new_x)
[1 2 4]
But that's incredibly messy! I'm trying to find a solution as elegant as IDL...
I had a similar issue, but involving many more masking commands and more arrays to apply them to. My solution is that I do all the masking on one array and then use the finally masked array as the condition in the masked_where command.
For example:
y = np.array([2,1,5,2]) # y axis
x = np.array([1,2,3,4]) # x axis
m = np.ma.masked_where(y>5, y) # filter out values larger than 5
new_x = np.ma.masked_where(np.ma.getmask(m), x) # applies the mask of m on x
The nice thing is you can now apply this mask to many more arrays without going through the masking process for each of them.
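For example, a small sketch reusing one mask across several arrays (z here is a made-up third array):

import numpy as np

y = np.array([2, 1, 5, 2])
x = np.array([1, 2, 3, 4])
z = np.array([10, 20, 30, 40])  # made-up companion array

m = np.ma.masked_where(y > 2, y)
new_x = np.ma.masked_where(np.ma.getmask(m), x)
new_z = np.ma.masked_where(np.ma.getmask(m), z)
print(np.ma.compressed(new_x))  # [1 2 4]
print(np.ma.compressed(new_z))  # [10 20 40]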
Why not simply
import numpy as np
y = np.array([2,1,5,2]) # y axis
x = np.array([1,2,3,4]) # x axis
m = np.ma.masked_where(y>2, y) # filter out values larger than 2
print list(m)
print np.ma.compressed(m)
# mask x the same way
m_ = np.ma.masked_where(y>2, x) # mask x where y is larger than 2
# print here the list
print list(m_)
print np.ma.compressed(m_)
code is for Python 2.x
Also, as proposed by joris, new_x = x[~m.mask].copy() does the job, giving an array
>>> new_x
array([1, 2, 4])
This may not be 100% what the OP wanted to know, but it's a cute little piece of code I use all the time: if you want to mask several arrays the same way, you can use this generalized function to mask a dynamic number of numpy arrays at once:
def apply_mask_to_all(mask, *arrays):
    assert all(arr.shape == mask.shape for arr in arrays), "All arrays need to have the same shape as the mask"
    return tuple(arr[mask] for arr in arrays)
See this example usage:
# init 4 equally shaped arrays
x1 = np.random.rand(3,4)
x2 = np.random.rand(3,4)
x3 = np.random.rand(3,4)
x4 = np.random.rand(3,4)
# create a mask
mask = x1 > 0.8
# apply the mask to all arrays at once
x1, x2, x3, x4 = apply_mask_to_all(mask, x1, x2, x3, x4)
