Avoid using a for loop (Python 3)

I have an array of shape (3,2):
import numpy as np
arr = np.array([[0.,0.],[0.25,-0.125],[0.5,-0.125]])
I am trying to build a matrix of shape (6,2) containing, for each row i of arr, the outer product of arr[i] with itself. At the moment I am using a for loop:
size = np.shape(arr)
matrix = np.zeros((size[0]*size[1], size[1]))
for i in range(np.shape(arr)[0]):
    prod = np.outer(arr[i], arr[i].T)
    matrix[size[1]*i:size[1] + size[1]*i, :] = prod
Resulting:
matrix = array([[ 0.      ,  0.      ],
                [ 0.      ,  0.      ],
                [ 0.0625  , -0.03125 ],
                [-0.03125 ,  0.015625],
                [ 0.25    , -0.0625  ],
                [-0.0625  ,  0.015625]])
Is there any way to build this matrix without using a for loop (e.g. broadcasting)?

Extend the array to 3D with None/np.newaxis, keeping the first axis aligned while letting the second axis be multiplied pair-wise; perform the multiplication with broadcasting and reshape back to 2D -
matrix = (arr[:,None,:]*arr[:,:,None]).reshape(-1,arr.shape[1])
We can also use np.einsum -
matrix = np.einsum('ij,ik->ijk',arr,arr).reshape(-1,arr.shape[1])
The einsum string representation might be more intuitive, as it lets us visualize three things:
Axes that are aligned (axis=0 here).
Axes that are summed over (none here).
Axes that are kept, i.e. element-wise multiplied (axis=1 here).
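As a quick check (a minimal sketch reusing arr from the question), both one-liners reproduce the loop's output:
import numpy as np

arr = np.array([[0., 0.], [0.25, -0.125], [0.5, -0.125]])

# broadcasting version: (3,1,2) * (3,2,1) -> stack of outer products (3,2,2) -> (6,2)
m1 = (arr[:, None, :] * arr[:, :, None]).reshape(-1, arr.shape[1])

# einsum version: same stack of outer products, then reshape
m2 = np.einsum('ij,ik->ijk', arr, arr).reshape(-1, arr.shape[1])

print(np.allclose(m1, m2))  # True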

Related

Re-calculate elements of symmetric matrix using an "i not equal to j" loop in Python

The correlation matrix is symmetric: its upper-triangular and lower-triangular elements (together the off-diagonal elements) are mirror images of each other, as opposed to the diagonal elements, which are all equal to 1 in any correlation matrix since a variable's correlation with itself is 1.
An off-diagonal element stays the same when its row index i and column index j are swapped, i.e. the correlation of variables 1 and 2 (row 1, column 2) is the same as that of variables 2 and 1 (row 2, column 1). Therefore we only need to re-calculate the lower-triangular elements and then copy them to the corresponding positions in the upper triangle.
import numpy as np
from numpy.random import randn
X = randn(20,3)
Rho = np.corrcoef(X.T) #correlation matrix
print(np.tril(Rho)) #lower off-diagonal of matrix Rho to re-calculate, then copy to other side
shows
array([[ 1.        ,  0.        ,  0.        ],
       [-0.03003281,  1.        ,  0.        ],
       [-0.02602238,  0.06137713,  1.        ]])
What is the most efficient way to code an "i not-equal-to j" loop for the following sequence of steps:
re-calculate the lower off-diagonal elements of the symmetric matrix according to some apply function (to keep it simple, we will just add 2 to each of these elements)
flip those same calculations onto the mirror image (the corresponding upper off-diagonal elements)
Also, replace the diagonal elements of the symmetric matrix with 10's (instead of the 1's found in a correlation matrix)
The aim is to generate a new matrix that is a re-calculation of the original.
Let us generate Rho first (note that I'm initializing the pseudo-random number generator in order to obtain the same Rho in different runs of the code):
In [526]: import numpy as np
In [527]: np.random.seed(0)
...: n = 3
...: X = np.random.randn(20, n)
...: Rho = np.corrcoef(X.T)
In [528]: Rho
Out[528]:
array([[1.        , 0.03224462, 0.05021998],
       [0.03224462, 1.        , 0.15140358],
       [0.05021998, 0.15140358, 1.        ]])
Then you can use NumPy's tril_indices_from and advanced indexing to generate the new matrix:
In [548]: result = np.zeros_like(Rho)
In [549]: lrows, lcols = np.tril_indices_from(Rho, k=-1)
In [550]: result[lrows, lcols] = Rho[lrows, lcols] + 2
In [551]: result
Out[551]:
array([[0.        , 0.        , 0.        ],
       [2.03224462, 0.        , 0.        ],
       [2.05021998, 2.15140358, 0.        ]])
In [552]: result[lcols, lrows] = result[lrows, lcols]
In [553]: result
Out[553]:
array([[0.        , 2.03224462, 2.05021998],
       [2.03224462, 0.        , 2.15140358],
       [2.05021998, 2.15140358, 0.        ]])
In [554]: result[np.arange(n), np.arange(n)] = 10
In [555]: result
Out[555]:
array([[10.        ,  2.03224462,  2.05021998],
       [ 2.03224462, 10.        ,  2.15140358],
       [ 2.05021998,  2.15140358, 10.        ]])
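For reuse, the same steps can be wrapped in a small helper; a sketch (the name recalc_symmetric and the hard-coded +2 / diagonal value of 10 are just the example transformation from the question):
import numpy as np

def recalc_symmetric(rho, diag_value=10.0):
    # re-calculate the strict lower triangle, mirror it, then overwrite the diagonal
    n = rho.shape[0]
    out = np.zeros_like(rho)
    lrows, lcols = np.tril_indices_from(rho, k=-1)
    out[lrows, lcols] = rho[lrows, lcols] + 2      # apply function to lower triangle
    out[lcols, lrows] = out[lrows, lcols]          # mirror onto the upper triangle
    out[np.arange(n), np.arange(n)] = diag_value   # replace the diagonal
    return out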

How to compute an affine transformation of multiple points?

I have a 3D NumPy array of points with shape (484,3,1) and a 2D transformation matrix of shape (3,3). I want to compute the transformation for all 484 points.
I have tried to reshape the arrays and compute the dot product, but I am struggling to get it to output a (484,3,1) shaped array where all the points are transformed.
points = np.random.randint(0, 979, (484,3,1))
transformation = np.array([[0.94117647, 0.        , 0.        ],
                           [0.        , 0.94117647, 0.        ],
                           [0.        , 0.        , 1.        ]])
points.shape                      # (484, 3, 1)
transformation.shape              # (3, 3)
transformation.dot(points).shape  # (3, 484, 1)
I would like this to be as optimized as possible. Any advice would be greatly appreciated.
Just reshape to (484,3) and use np.matmul (np.dot is also possible, but since you are doing a matrix multiplication, matmul is preferred according to the documentation):
np.matmul(points.reshape(484,-1), transformation).reshape(484,3,-1)
The resulting shape is again (484,3,1), given by the final reshape.
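Note that points.reshape(484, -1) @ transformation multiplies each point as a row vector, which only matches transformation.dot(point) when the matrix is symmetric (as the diagonal scaling above happens to be). To keep the column-vector convention for an arbitrary matrix, broadcasting handles the (484,3,1) stack directly; a minimal sketch:
import numpy as np

points = np.random.randint(0, 979, (484, 3, 1))
transformation = np.array([[0.94117647, 0.        , 0.        ],
                           [0.        , 0.94117647, 0.        ],
                           [0.        , 0.        , 1.        ]])

# matmul broadcasts the (3,3) matrix over the stack of (3,1) column vectors
transformed = transformation @ points                           # shape (484, 3, 1)

# equivalently, with einsum
transformed_e = np.einsum('ij,njk->nik', transformation, points)
print(transformed.shape, np.allclose(transformed, transformed_e))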

How to blur 3D array of points, while maintaining their original values? (Python)

I have a sparse 3D array of values. I am trying to turn each "point" into a fuzzy "sphere", by applying a Gaussian filter to the array.
I would like the original value at the point (x,y,z) to remain the same. I just want to create falloff values around this point... But applying the Gaussian filter changes the original (x,y,z) value as well.
I am currently doing this:
dataCube = scipy.ndimage.filters.gaussian_filter(dataCube, 3, truncate=8)
Is there a way for me to normalize this, or do something so that my original values are still in this new dataCube? I am not necessarily tied to using a Gaussian filter, if that is not the best approach.
You can do this using a convolution with a kernel that has 1 as its central value, and a width smaller than the spacing between your data points.
1-d example:
import numpy as np
import scipy.signal
data = np.array([0,0,0,0,0,5,0,0,0,0,0])
kernel = np.array([0.5,1,0.5])
scipy.signal.convolve(data, kernel, mode="same")
gives
array([ 0. , 0. , 0. , 0. , 2.5, 5. , 2.5, 0. , 0. , 0. , 0. ])
Note that fftconvolve might be much faster for large arrays. You also have to specify what should happen at the boundaries of your array.
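For large arrays, signal.fftconvolve is a drop-in replacement; a quick check with the same 1-d data and kernel:
import numpy as np
from scipy import signal

data = np.array([0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0], dtype=float)
kernel = np.array([0.5, 1, 0.5])

# FFT-based convolution; same result as signal.convolve up to floating-point error
smoothed = signal.fftconvolve(data, kernel, mode="same")
print(np.allclose(smoothed, signal.convolve(data, kernel, mode="same")))  # True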
Update: 3-d example
import numpy as np
from scipy import signal
# first build the smoothing kernel
sigma = 1.0 # width of kernel
x = np.arange(-3,4,1) # coordinate arrays -- make sure they contain 0!
y = np.arange(-3,4,1)
z = np.arange(-3,4,1)
xx, yy, zz = np.meshgrid(x,y,z)
kernel = np.exp(-(xx**2 + yy**2 + zz**2)/(2*sigma**2))
# apply to sample data
data = np.zeros((11,11,11))
data[5,5,5] = 5.
filtered = signal.convolve(data, kernel, mode="same")
# check output
print(filtered[:,5,5])
gives
[ 0. 0. 0.05554498 0.67667642 3.0326533 5. 3.0326533
0.67667642 0.05554498 0. 0. ]
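A quick check that the original value survives, which is the point of using a kernel whose central value is 1: as long as neighbouring points are farther apart than the kernel width, the peak of the output equals the input value.
print(filtered[5, 5, 5])  # 5.0 -- the original value at the point is preserved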

Reshape numpy (n,) vector to (n,1) vector

It is easier for me to think of vectors as column vectors when I need to do some linear algebra, so I prefer shapes like (n,1).
Is there significant memory usage difference between shapes (n,) and (n,1)?
What is preferred way?
And how to reshape (n,) vector into (n,1) vector. Somehow b.reshape((n,1)) doesn't do the trick.
a = np.random.random((10,1))
b = np.ones((10,))
b.reshape((10,1))
print(a)
print(b)
[[ 0.76336295]
[ 0.71643237]
[ 0.37312894]
[ 0.33668241]
[ 0.55551975]
[ 0.20055153]
[ 0.01636735]
[ 0.5724694 ]
[ 0.96887004]
[ 0.58609882]]
[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
A simpler way, with a bit of NumPy syntax sugar, is to use
b.reshape(-1,1)
where NumPy automatically computes the correct size for the dimension given as -1.
ndarray.reshape() returns a new view or a copy (depending on the new shape); it does not modify the array in place.
b.reshape((10, 1))
on its own is effectively a no-op, since the created view/copy is not assigned to anything. The "fix" is simple:
b_new = b.reshape((10, 1))
The amount of memory used should not differ at all between the two shapes. NumPy arrays use strides, so the shapes (10,) and (10, 1) can share the same underlying buffer; only the step sizes used to move to the next row or column change.
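For completeness, indexing with np.newaxis (or None) gives the same (n, 1) view without spelling out the length; a minimal sketch:
import numpy as np

b = np.ones((10,))
col = b[:, np.newaxis]   # equivalent to b.reshape(-1, 1)
print(col.shape)         # (10, 1)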

numpy get std between datasets

I have a dataset array A of shape n×2. It can be plotted on the x and y axes.
A[:,1] gets me all of the y values and A[:,0] gets me all the x values.
Now, I have a few other dataset arrays that are similar to A. X values are the same for these similar arrays. How do I calculate the standard deviation of the datasets? There should be a std value for each X. In the end my result std should have a length of n.
I can do this the manual way with loops but I'm not sure how to do this using NumPy in a pythonic and simple manner.
here are some sample data:
A=[[0,2.54],[1,254.5],[2,-43]]
B=[[0,3.34],[1,154.5],[2,-93]]
std_Array = [std(2.54, 3.34), std(254.5, 154.5), std(-43, -93)]
Suppose your arrays are all the same shape and they are in a list. Then to get the standard deviation of the first column of each you can do
arrays = [np.random.rand(10, 2) for _ in range(8)]
np.dstack(arrays).std(axis=0)[0]
This stacks the 2-D arrays into a 3-D array and then takes the std along the first axis, giving a 2 x 8 result (8 being the number of arrays). The first row of the result holds the standard deviations of the 8 sets of x-values.
If you post some sample data perhaps we could help more.
Is this pythonic enough?
std_Array = numpy.std((A,B), axis = 0)[:,1]
li_arr = [np.array(x)[:, 1] for x in [A, B]]
This produces NumPy arrays containing just the column you want; the result will be
[array([ 2.54, 254.5 , -43. ]), array([ 3.34, 154.5 , -93. ])]
then you stack the values using column_stack
arr = np.column_stack(li_arr)
this is the stacked result
array([[  2.54,   3.34],
       [254.5 , 154.5 ],
       [-43.  , -93.  ]])
and then finally
np.std(arr, axis=1)
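As a quick check, the approaches agree on the sample data from the question (note that np.std defaults to the population standard deviation, ddof=0):
import numpy as np

A = [[0, 2.54], [1, 254.5], [2, -43]]
B = [[0, 3.34], [1, 154.5], [2, -93]]

# std across the two datasets, keeping only the y-column
print(np.std((A, B), axis=0)[:, 1])
# [ 0.4 50.  25. ]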
