What is the Python equivalent for MATLAB's sgolay(k, f)?

I have a function in MATLAB
[b,g] = sgolay(k, f);
It outputs an f x f matrix.
When I run the equivalent for the same values of k and f in Python, using:
scipy.signal.savgol_coeffs(f, k)
It outputs an entirely different array of only f elements.
The values in consideration are:
k = 4, f = 21
savgol_filter() takes three arguments, including the array, whereas sgolay() takes only two. Also, savgol_coeffs is not generating the required matrix.
What is the Python equivalent for obtaining the matrix generated by sgolay(k, f) in MATLAB?

If you inspect the matrix b returned by Matlab's sgolay function, you'll see that the center row is the same as the 1-d array returned by SciPy's savgol_coeffs. The upper and lower halves of b, with (framelen - 1)/2 rows in each part, are the coefficients of the Savitzky-Golay filters to be applied to the ends of the signal, where the filter is not symmetric. That is, for the (framelen - 1)/2 values at each end of the signal, each filtered value is computed using a different set of coefficients.
You can use savgol_coeffs to generate b by iterating over the pos argument. The following ipython session shows an example.
In [74]: import numpy as np
In [75]: from scipy.signal import savgol_coeffs
In [76]: np.set_printoptions(precision=11, linewidth=90)
In [77]: order = 3
In [78]: windowlen = 5
These are the coefficients of the symmetric (i.e. centered) Savitzky-Golay filter. The 1-d array should match the center row of the matrix returned by sgolay:
In [79]: savgol_coeffs(windowlen, order)
Out[79]: array([-0.08571428571, 0.34285714286, 0.48571428571, 0.34285714286, -0.08571428571])
If we set pos=windowlen-1, we get coefficients designed for evaluating the filter at one end of the window. These should match the first row of the array returned by sgolay:
In [80]: savgol_coeffs(windowlen, order, pos=windowlen-1)
Out[80]: array([ 0.98571428571, 0.05714285714, -0.08571428571, 0.05714285714, -0.01428571429])
Similarly, pos=0 gives the coefficients for the other end of the window. These should match the last row of the matrix returned by sgolay:
In [81]: savgol_coeffs(windowlen, order, pos=0)
Out[81]: array([-0.01428571429, 0.05714285714, -0.08571428571, 0.05714285714, 0.98571428571])
Here's the full array to match the return value of Matlab's sgolay:
In [82]: b = np.array([savgol_coeffs(windowlen, order, pos=p) for p in range(windowlen-1, -1, -1)])
In [83]: b
Out[83]:
array([[ 0.98571428571, 0.05714285714, -0.08571428571, 0.05714285714, -0.01428571429],
[ 0.05714285714, 0.77142857143, 0.34285714286, -0.22857142857, 0.05714285714],
[-0.08571428571, 0.34285714286, 0.48571428571, 0.34285714286, -0.08571428571],
[ 0.05714285714, -0.22857142857, 0.34285714286, 0.77142857143, 0.05714285714],
[-0.01428571429, 0.05714285714, -0.08571428571, 0.05714285714, 0.98571428571]])
If you compare this to the result of b = sgolay(3, 5) in Matlab, you'll see that they are the same.
To get the g matrix returned by sgolay, you'll have to call savgol_coeffs with deriv set to each value in range(order+1), reverse and transpose the array, and scale each column by the factorial of its derivative order. To reverse the coefficients, you can use a slice of the form ::-1, or pass use='dot' to savgol_coeffs.
Here's one way to use savgol_coeffs to generate the g matrix with order=3 and windowlen=5:
In [12]: import numpy as np
In [13]: from scipy.signal import savgol_coeffs
In [14]: from scipy.special import factorial
In [15]: np.set_printoptions(precision=11, linewidth=90, suppress=True)
In [16]: order = 3
In [17]: windowlen = 5
In [18]: g = np.array([savgol_coeffs(windowlen, order, deriv=d, use='dot') for d in range(order+1)]).T / factorial(np.arange(order+1))
In [19]: g
Out[19]:
array([[-0.08571428571, 0.08333333333, 0.14285714286, -0.08333333333],
[ 0.34285714286, -0.66666666667, -0.07142857143, 0.16666666667],
[ 0.48571428571, 0. , -0.14285714286, 0. ],
[ 0.34285714286, 0.66666666667, -0.07142857143, -0.16666666667],
[-0.08571428571, -0.08333333333, 0.14285714286, 0.08333333333]])
You don't say why you want the full windowlen x windowlen array in Python. You don't need it to use savgol_filter.
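For reference, applying the filter directly looks like this; a minimal sketch, where the noisy signal is made up purely for illustration:
import numpy as np
from scipy.signal import savgol_filter

# Made-up noisy signal, just for illustration.
t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(200)

# Smooth with the window length and polynomial order from the question.
y = savgol_filter(x, window_length=21, polyorder=4)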

Related

Compute KL divergence between rows of a matrix and a vector

I have a matrix (numpy 2-d array) in which each row is a valid probability distribution. I have another vector (numpy 1-d array), again a probability distribution. I need to compute the KL divergence between each row of the matrix and the vector. Is it possible to do this without using for loops?
This question asks the same thing, but none of the answers solve my problem. One of them suggests using a for loop, which I want to avoid since I have large data. Another provides a solution in TensorFlow, but I want one for numpy arrays.
scipy.stats.entropy computes the KL divergence between 2 vectors, but I couldn't work out how to use it when one of them is a matrix.
The function scipy.stats.entropy can, in fact, do the vectorized calculation, but you have to reshape the arguments appropriately for it to work. When the inputs are two-dimensional arrays, entropy expects the columns to hold the probability vectors. In the case where p is two-dimensional and q is one-dimensional, a trivial dimension must be added to q to make the arguments compatible for broadcasting.
Here's an example. First, the imports:
In [10]: import numpy as np
In [11]: from scipy.stats import entropy
Create a two-dimensional p whose rows are the probability vectors, and a one-dimensional probability vector q:
In [12]: np.random.seed(8675309)
In [13]: p = np.random.rand(3, 5)
In [14]: p /= p.sum(axis=1, keepdims=True)
In [15]: q = np.random.rand(5)
In [16]: q /= q.sum()
In [17]: p
Out[17]:
array([[0.32085531, 0.29660176, 0.14113073, 0.07988999, 0.1615222 ],
[0.05870513, 0.15367858, 0.29585406, 0.01298657, 0.47877566],
[0.1914319 , 0.29324935, 0.1093297 , 0.17710131, 0.22888774]])
In [18]: q
Out[18]: array([0.06804561, 0.35392387, 0.29008139, 0.04580467, 0.24214446])
For comparison with the vectorized result, here's the result computed using a Python loop.
In [19]: [entropy(t, q) for t in p]
Out[19]: [0.32253909299531597, 0.17897138916539493, 0.2627905326857023]
To make entropy do the vectorized calculation, the columns of the first argument must be the probability vectors, so we'll transpose p. Then, to make q compatible with p.T, we'll reshape it into a two-dimensional array with shape (5, 1) (i.e. it contains a single column):
In [20]: entropy(p.T, q.reshape(-1, 1))
Out[20]: array([0.32253909, 0.17897139, 0.26279053])
Note: It is tempting to use q.T as the second argument, but that won't work. In NumPy, the transpose operation only reorders existing dimensions; it never creates new ones. So the transpose of a one-dimensional array is itself: q.T has the same shape as q.
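To see this concretely, continuing the session above:
In [21]: q.shape, q.T.shape, q.reshape(-1, 1).shape
Out[21]: ((5,), (5,), (5, 1))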
Older version of this answer follows...
You can use scipy.special.kl_div or scipy.special.rel_entr to do this. Here's an example.
In [17]: import numpy as np
...: from scipy.stats import entropy
...: from scipy.special import kl_div, rel_entr
Make p and q for the example.
p has shape (3, 5); the rows are the probability distributions. q is a 1-d array with length 5.
In [18]: np.random.seed(8675309)
...: p = np.random.rand(3, 5)
...: p /= p.sum(axis=1, keepdims=True)
...: q = np.random.rand(5)
...: q /= q.sum()
This is the calculation that you want, using a Python loop and scipy.stats.entropy. I include this here so the result can be compared to the vectorized calculation below.
In [19]: [entropy(t, q) for t in p]
Out[19]: [0.32253909299531597, 0.17897138916539493, 0.2627905326857023]
We have constructed p and q so that the probability vectors each sum to 1. In this case, the above result can also be computed in a vectorized calculation with scipy.special.rel_entr or scipy.special.kl_div. (I recommend rel_entr. kl_div adds and subtracts additional terms that ultimately cancel in the sum, so it does a bit more work than necessary.) These functions compute only the point-wise part of the calculation; you have to sum the result to get the actual entropy or divergence.
In [20]: rel_entr(p, q).sum(axis=1)
Out[20]: array([0.32253909, 0.17897139, 0.26279053])
In [21]: kl_div(p, q).sum(axis=1)
Out[21]: array([0.32253909, 0.17897139, 0.26279053])
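If you do this often, it might be convenient to wrap the calculation in a small helper; a minimal sketch (the name kl_rows is just for illustration), assuming the rows of p and the vector q each already sum to 1:
import numpy as np
from scipy.special import rel_entr

def kl_rows(p, q):
    # KL divergence between each row of p and the 1-d vector q.
    return rel_entr(p, q).sum(axis=1)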

Looking for the "right" way to do a series of matrix-vector products in numpy

I have a 3D numpy array vecs. vecs has shape [M,N,3]. That is to say, vecs is an MxN collection of 3-element vectors. I am looking for a pythonic (numpythonic?) way to take the matrix product of each of those vectors with a single 3x3 matrix mat. In other words, I want a clean way to do this:
>>> for k in range(vecs.shape[0]):
...     for j in range(vecs.shape[1]):
...         vecs[k,j] = np.dot(mat, vecs[k,j])
Any way to do this?
Your dot can be expressed with einsum as:
res[k,j,:] = np.einsum('ab,b->a',mat,vecs[k,j,:])
and generalized to work with the whole array as
res = np.einsum('ab,kjb->kja',mat,vecs)
In this particular case I think you can just do
np.dot(vecs, mat.T)
since np.dot(mat, v) is the same as np.dot(v, mat.T) for each vector v, and np.dot of a 3-d array with a 2-d array sums over the last axis of the first and the second-to-last axis of the second.
Here is a short snippet of code demonstrating that they are the same:
In [1]: import numpy as np
In [2]: a = np.random.randn(100,100,3)
In [3]: b = np.random.randn(3,3)
In [4]: expected = np.zeros_like(a)
In [5]: for i in range(a.shape[0]):
...: for j in range(a.shape[1]):
...: expected[i,j] = np.dot(b,a[i,j])
...:
In [6]: np.allclose(expected,np.dot(a,b.T))
Out[6]: True
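The einsum expression checks out the same way (recall that in this session the 3x3 matrix is b and the stack of vectors is a):
In [7]: np.allclose(expected, np.einsum('ab,kjb->kja', b, a))
Out[7]: True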
You can use np.tensordot
vecs = np.tensordot(mat, vecs.T, axes=1).T
Here you transpose vecs to get a (3, N, M) array so the dot product with mat can be applied, and then transpose the resulting (3, N, M) array back into an (M, N, 3) array.
Regarding the axes argument:
If an int N, sum over the last N axes of a and the first N axes of b
in order. The sizes of the corresponding axes must match.
So, in your case, you sum over the second axis of mat and the first axis of vecs.T.
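As a quick sanity check that tensordot and einsum agree, here's a minimal sketch with made-up shapes:
import numpy as np

mat = np.random.randn(3, 3)
vecs = np.random.randn(4, 5, 3)

res_tensordot = np.tensordot(mat, vecs.T, axes=1).T
res_einsum = np.einsum('ab,kjb->kja', mat, vecs)
print(np.allclose(res_tensordot, res_einsum))  # True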

Masking a 2D array and operating on second array based off masked indices

I have a function that reads in and outputs a 2D array. I want the output to be constant (pi in this case) for every index in the input that equals 0, otherwise I perform some maths on it. E.g:
import numpy as np
import numpy.ma as ma

def my_func(x):
    mask = ma.where(x==0,x)
    # make an array of pi's the same size and shape as the input
    y = np.pi * np.ones(x)
    # pseudo-code bit I can't figure out
    y.not_masked = y**2
    return y

my_array = [[0,1,2],[1,0,2],[1,2,0]]
result_array = my_func(my_array)
This should give me the following:
result_array = [[3.14, 1, 4],[1, 3.14, 4], [1, 4, 3.14]]
I.e. it has applied y**2 to each element in the 2D list that doesn't equal zero, and replaced all the zeros with pi.
I need this because my function will include division, and I don't know the indexes beforehand. I'm trying to convert a matlab tutorial from a textbook into Python and this function is stumping me!
Thanks
Just use np.where() directly:
y = np.where(x, x**2, np.pi)
Example:
>>> x = np.asarray([[0,1,2],[1,0,2],[1,2,0]])
>>> y = np.where(x, x**2, np.pi)
>>> print(y)
[[ 3.14159265 1. 4. ]
[ 1. 3.14159265 4. ]
[ 1. 4. 3.14159265]]
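One caveat, since the question mentions division: np.where evaluates both branches before selecting, so something like np.where(x, 1/x, np.pi) will still trigger a divide-by-zero warning at the zero entries. A sketch of the usual idiom, which divides only where the mask is true:
x = np.asarray([[0, 1, 2], [1, 0, 2], [1, 2, 0]], dtype=float)
y = np.full_like(x, np.pi)                 # start with pi everywhere
np.divide(1.0, x, out=y, where=(x != 0))   # overwrite only the non-zero entries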
Try this:
my_array = np.array([[0,1,2],[1,0,2],[1,2,0]]).astype(float)

def my_func(x):
    mask = x == 0
    x[mask] = np.pi
    x[~mask] = x[~mask]**2  # or some other operation on x...
    return x
I would suggest rather than using masks you can use a boolean array to achieve what you want.
def my_func(x):
    # create a boolean matrix, a, that has True where x==0
    # and False where x!=0
    a = x == 0
    x[a] = np.pi
    # flip a with ~ (logical not) so we can operate
    # on the non-zero values of the array
    x[~a] = x[~a]**2
    return x  # return the transformed array
my_array = np.array([[0.,1.,2.],[1.,0.,2.],[1.,2.,0.]])
result_array = my_func(my_array)
this gives the output:
array([[ 3.14159265, 1. , 4. ],
[ 1. , 3.14159265, 4. ],
[ 1. , 4. , 3.14159265]])
Notice that I passed a NumPy array to the function; originally you passed a list, which will cause problems when you attempt mathematical operations. Also notice I defined the array with 1. rather than just 1, to make sure it is an array of floats rather than integers: if it is an array of integers, setting values equal to pi will truncate them to 3.
Perhaps it would be good to add a check to the function: verify that the input is a NumPy array rather than a list or other object, and that it contains floats, and convert it if not.
EDIT:
Change to using ~a rather than invert(a) as per Scotty1's suggestion.
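A minimal sketch of the check suggested above, purely illustrative:
def my_func(x):
    x = np.asarray(x, dtype=float)  # accept lists too; floats so pi isn't truncated
    a = x == 0
    out = x.copy()                  # avoid mutating the caller's array
    out[a] = np.pi
    out[~a] = out[~a]**2
    return out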

Numpy Broadcast to perform euclidean distance vectorized

I have matrices that are 2 x 4 and 3 x 4. I want to find the Euclidean distance across rows, and get a 2 x 3 matrix at the end. Here is the code with one for loop that computes the Euclidean distance for every row vector in a against all b row vectors. How do I do the same without using for loops?
import numpy as np
a = np.array([[1,1,1,1],[2,2,2,2]])
b = np.array([[1,2,3,4],[1,1,1,1],[1,2,1,9]])
dists = np.zeros((2, 3))
for i in range(2):
    dists[i] = np.sqrt(np.sum(np.square(a[i] - b), axis=1))
Here are the original input variables:
A = np.array([[1,1,1,1],[2,2,2,2]])
B = np.array([[1,2,3,4],[1,1,1,1],[1,2,1,9]])
A
# array([[1, 1, 1, 1],
# [2, 2, 2, 2]])
B
# array([[1, 2, 3, 4],
# [1, 1, 1, 1],
# [1, 2, 1, 9]])
A is a 2x4 array.
B is a 3x4 array.
We want to compute the Euclidean distance matrix in one entirely vectorized operation, where dist[i,j] contains the distance between the ith instance in A and the jth instance in B. So dist is 2x3 in this example.
The Euclidean distance, dist[i,j] = sqrt(sum_k (A[i,k] - B[j,k])**2), could ostensibly be written with numpy as
dist = np.sqrt(np.sum(np.square(A-B))) # DOES NOT WORK
# Traceback (most recent call last):
# File "<stdin>", line 1, in <module>
# ValueError: operands could not be broadcast together with shapes (2,4) (3,4)
However, as shown above, the problem is that the element-wise subtraction operation A-B involves incompatible array sizes, specifically the 2 and 3 in the first dimension.
A has dimensions 2 x 4
B has dimensions 3 x 4
In order to do element-wise subtraction, we have to pad either A or B to satisfy numpy's broadcast rules. I'll choose to pad A with an extra dimension so that it becomes 2 x 1 x 4, which allows the arrays' dimensions to line up for broadcasting. For more on numpy broadcasting, see the tutorial in the scipy manual and the final example in this tutorial.
You can perform the padding with either the np.newaxis value or the np.reshape command. I show both below:
# First approach is to add the extra dimension to A with np.newaxis
A[:,np.newaxis,:] has dimensions 2 x 1 x 4
B has dimensions 3 x 4
# Second approach is to reshape A with np.reshape
np.reshape(A, (2,1,4)) has dimensions 2 x 1 x 4
B has dimensions 3 x 4
As you can see, using either approach will allow the dimensions to line up. I'll use the first approach with np.newaxis. So now, this will work to create A-B, which is a 2x3x4 array:
diff = A[:,np.newaxis,:] - B
# Alternative approach:
# diff = np.reshape(A, (2,1,4)) - B
diff.shape
# (2, 3, 4)
Now we can put that difference expression into the dist equation statement to get the final result:
dist = np.sqrt(np.sum(np.square(A[:,np.newaxis,:] - B), axis=2))
dist
# array([[ 3.74165739, 0. , 8.06225775],
# [ 2.44948974, 2. , 7.14142843]])
Note that the sum is over axis=2, which means take the sum over the 2x3x4 array's third axis (where the axis id starts with 0).
If your arrays are small, then the above command will work just fine. However, if you have large arrays, then you may run into memory issues. Note that in the above example, numpy internally created a 2x3x4 array to perform the broadcasting. If we generalize A to have dimensions a x z and B to have dimensions b x z, then numpy will internally create an a x b x z array for broadcasting.
We can avoid creating this intermediate array by doing some mathematical manipulation. Because you are computing the Euclidean distance as a sum of squared differences, we can take advantage of the fact that the sum of squared differences can be expanded as
sum_k (A[i,k] - B[j,k])**2 = sum_k A[i,k]**2 - 2*sum_k A[i,k]*B[j,k] + sum_k B[j,k]**2
Note that the middle term involves a sum over element-wise multiplication. This sum over multiplications is better known as a dot product. Because A and B are each a matrix, that operation is actually a matrix multiplication, so the middle term is -2*A.dot(B.T).
We can then write the following numpy code:
threeSums = np.sum(np.square(A)[:,np.newaxis,:], axis=2) - 2 * A.dot(B.T) + np.sum(np.square(B), axis=1)
dist = np.sqrt(threeSums)
dist
# array([[ 3.74165739, 0. , 8.06225775],
# [ 2.44948974, 2. , 7.14142843]])
Note that the answer above is exactly the same as the previous implementation. Again, the advantage here is that we do not need to create the intermediate 2x3x4 array for broadcasting.
For completeness, let's double-check that the dimensions of each summand in threeSums allow broadcasting.
np.sum(np.square(A)[:,np.newaxis,:], axis=2) has dimensions 2 x 1
2 * A.dot(B.T) has dimensions 2 x 3
np.sum(np.square(B), axis=1) has dimensions 1 x 3
So, as expected, the final dist array has dimensions 2x3.
This use of the dot product in lieu of sum of element-wise multiplication is also discussed in this tutorial.
I had the same problem recently, working on deep learning (Stanford cs231n, Assignment 1), but when I used
np.sqrt((np.square(a[:,np.newaxis]-b).sum(axis=2)))
there was an error:
MemoryError
That means I ran out of memory (in fact, it produced a 500 x 5000 x 1024 intermediate array, which is huge!).
To prevent that error, we can use a formula to simplify:
code:
import numpy as np

aSumSquare = np.sum(np.square(a), axis=1)
bSumSquare = np.sum(np.square(b), axis=1)
mul = np.dot(a, b.T)
dists = np.sqrt(aSumSquare[:,np.newaxis] + bSumSquare - 2*mul)
Simply use np.newaxis at the right place:
np.sqrt((np.square(a[:,np.newaxis]-b).sum(axis=2)))
This functionality is already included in scipy's spatial module, and I recommend using it, since it is vectorized and highly optimized under the hood. But, as the other answers show, there are ways you can do this yourself.
import numpy as np
a = np.array([[1,1,1,1],[2,2,2,2]])
b = np.array([[1,2,3,4],[1,1,1,1],[1,2,1,9]])
np.sqrt((np.square(a[:,np.newaxis]-b).sum(axis=2)))
# array([[ 3.74165739, 0. , 8.06225775],
# [ 2.44948974, 2. , 7.14142843]])
from scipy.spatial.distance import cdist
cdist(a,b)
# array([[ 3.74165739, 0. , 8.06225775],
# [ 2.44948974, 2. , 7.14142843]])
Using numpy.linalg.norm also works well with broadcasting. Specifying an integer value for axis will use a vector norm, which defaults to the Euclidean norm.
import numpy as np
a = np.array([[1,1,1,1],[2,2,2,2]])
b = np.array([[1,2,3,4],[1,1,1,1],[1,2,1,9]])
np.linalg.norm(a[:, np.newaxis] - b, axis = 2)
# array([[ 3.74165739, 0. , 8.06225775],
# [ 2.44948974, 2. , 7.14142843]])

Left Matrix Division and Numpy Solve

I am trying to convert code that contains the \ operator from Matlab (Octave) to Python. Sample code:
B = [2;4]
b = [4;4]
B \ b
This works and produces 1.2 as an answer. Using this web page
http://mathesaurus.sourceforge.net/matlab-numpy.html
I translated that as:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
print(lin.solve(B, b))
This gave me an error:
numpy.linalg.linalg.LinAlgError: Array must be square
How come Matlab \ works with non square matrix for B?
Any solutions for this?
From MathWorks documentation for left matrix division:
If A is an m-by-n matrix with m ~= n and B is a column vector with m components, or a matrix with several such columns, then X = A\B is the solution in the least squares sense to the under- or overdetermined system of equations AX = B. In other words, X minimizes norm(A*X - B), the length of the vector AX - B.
The equivalent in numpy is np.linalg.lstsq:
In [15]: B = np.array([[2],[4]])
In [16]: b = np.array([[4],[4]])
In [18]: x,resid,rank,s = np.linalg.lstsq(B,b)
In [19]: x
Out[19]: array([[ 1.2]])
Matlab will actually perform a number of different operations when the \ operator is used, depending on the shape of the matrices involved (see here for more details). In your example, Matlab is returning a least-squares solution, rather than solving the linear equation directly, as it would for a square matrix. To get the same behaviour in numpy, do this:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
print(np.linalg.lstsq(B, b)[0])
which should give you the same solution as Matlab.
You can form the left inverse:
import numpy as np
import numpy.linalg as lin
B = np.array([[2],[4]])
b = np.array([[4],[4]])
B_linv = lin.solve(B.T.dot(B), B.T)
c = B_linv.dot(b)
print('c\n', c)
Result:
c
[[ 1.2]]
Actually, we can simply run the solver once, without forming an inverse, like this:
c = lin.solve(B.T.dot(B), B.T.dot(b))
print('c\n', c)
Result:
c
[[ 1.2]]
.... as before
Why? Because we start with the overdetermined system
B.dot(c) = b
Multiplying through by B.T gives
B.T.dot(B).dot(c) = B.T.dot(b)
Now B.T.dot(B) is square and full rank, so it does have an inverse. We can therefore multiply through by the inverse of B.T.dot(B), or use a solver as above, to get c.
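Numerically, the least-squares route and the normal-equations route agree; a quick check:
import numpy as np
import numpy.linalg as lin

B = np.array([[2.0], [4.0]])
b = np.array([[4.0], [4.0]])

c1 = lin.lstsq(B, b, rcond=None)[0]
c2 = lin.solve(B.T.dot(B), B.T.dot(b))
print(np.allclose(c1, c2))  # True; both give 1.2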
