Numpy vectorize sum over indices - python

I have a list of indices (list(int)) and a list of summing indices (list(list(int))). Given a 2D numpy array, for each column I need to sum the rows given by the corresponding entry of the second list and add the result to the row given by the corresponding entry of the first list. Is there any way to vectorize this?
Here is the normal code:
import numpy as np

indices = [1,0,2]
summing_indices = [[5,6,7],[6,7,8],[4,5]]
matrix = np.arange(9*3).reshape((9,3))
for c,i in enumerate(indices):
    matrix[i,c] = matrix[summing_indices[i],c].sum()+matrix[i,c]

Here's an almost* vectorized approach using np.add.reduceat -
lens = np.array(list(map(len,summing_indices)))  # list() needed on Python 3
col = np.repeat(indices,lens)
row = np.concatenate(summing_indices)
vals = matrix[row,col]
addvals = np.add.reduceat(vals,np.append(0,lens.cumsum()[:-1]))
matrix[indices,np.arange(len(indices))] += addvals[np.argsort(indices)]
Please note that this has some setup overhead, so it would be best suited for 2D input arrays with a good number of columns as we are iterating along the columns.
*: Almost because of the use of map() at the start, but computationally that should be negligible.
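If this operation is needed more than once, the reduceat steps can be wrapped into a small helper; the sketch below (the function name add_column_sums is just for illustration) checks it against the plain loop on the example arrays above:
import numpy as np

def add_column_sums(matrix, indices, summing_indices):
    # In-place version of the reduceat approach above
    lens = np.array(list(map(len, summing_indices)))
    col = np.repeat(indices, lens)
    row = np.concatenate(summing_indices)
    vals = matrix[row, col]
    addvals = np.add.reduceat(vals, np.append(0, lens.cumsum()[:-1]))
    matrix[indices, np.arange(len(indices))] += addvals[np.argsort(indices)]
    return matrix

indices = [1, 0, 2]
summing_indices = [[5, 6, 7], [6, 7, 8], [4, 5]]

# plain loop on one copy of the example matrix
expected = np.arange(9 * 3).reshape((9, 3))
for c, i in enumerate(indices):
    expected[i, c] += expected[summing_indices[i], c].sum()

# reduceat helper on another copy
result = add_column_sums(np.arange(9 * 3).reshape((9, 3)), indices, summing_indices)
print(np.array_equal(result, expected))  # True for this example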

Related

Numpy setting every column in a matrix to a certain value matching a condition

I have a matrix D and sort every row, getting the indices with argsort. I'm trying to set the values of some_matrix at indices 1-5 of np.argsort(D) to 1. What I have below does what I need, but is there a way to do this in one line with numpy arrays?
some_matrix = np.zeros((n,n))
for i in range(n):
    some_matrix[i,np.argsort(D)[i,1:5]] = 1
Firstly, note that you don't need a full sort, only a partition of elements 1-4 (I assume you need elements 1,2,3,4, because that's what your code does). So let's use that:
#assuming you want indices 1,2,3,4 of the sorted array, in any order
indices = np.argpartition(D, (1, 4), axis=1)[:, 1:5]
Now we've got the column indices of the elements at sorted positions 1 through 4 of each row (this is similar to indices = np.argsort(D, 1)[:, 1:5], but will be faster for large arrays). All that's left is to set these elements to 1:
np.put_along_axis(some_matrix, indices, 1, axis=1)
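A minimal end-to-end sketch (with a small random D standing in for the real data) that also checks the result against the original loop:
import numpy as np

n = 6
D = np.random.rand(n, n)

some_matrix = np.zeros((n, n))
indices = np.argpartition(D, (1, 4), axis=1)[:, 1:5]
np.put_along_axis(some_matrix, indices, 1, axis=1)

# loop version from the question, for comparison
check = np.zeros((n, n))
for i in range(n):
    check[i, np.argsort(D)[i, 1:5]] = 1
print(np.array_equal(some_matrix, check))  # True (barring ties in D)

Note that np.put_along_axis requires numpy >= 1.15; on older versions you can fall back to some_matrix[np.arange(n)[:, None], indices] = 1.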

Appending numpy array of arrays

I am trying to append an array to another array, but it's appending them as if they were just one array. What I would like is to have each array appended at its own index (without having to use a list; I want to use np arrays), i.e.
temp = np.array([])
for i in my_items:
    m = get_item_ids(i.color)  # returns an array like [1,4,20,5,3] (always the same number of items, but different ids)
    temp = np.append(temp, m, axis=0)
On the second iteration, let's suppose I get [5,4,15,3,10]; then I would like to have temp as
array([[1,4,20,5,3],[5,4,15,3,10]])
But instead I keep getting [1,4,20,5,3,5,4,15,3,10].
I am new to Python, but I am sure there is probably a way to concatenate like this with numpy without using lists?
You have to reshape m so that it has two dimensions:
m = m.reshape(-1, 1)
thus adding the second dimension. Then you can concatenate along axis=1 (note that np.concatenate takes a sequence of arrays, and temp must also be 2-D for this to work):
temp = np.concatenate((temp, m), axis=1)
List append is much better - faster and easier to use correctly.
temp = []
for i in my_items:
    m = get_item_ids(i.color)  # returns an array like [1,4,20,5,3] (always the same number of items, but different ids)
    temp.append(m)
Look at the list to see what it created. Then make an array from that:
arr = np.array(temp)
# or np.vstack(temp)
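For example, with the two arrays from the question standing in for the get_item_ids results:
import numpy as np

temp = []
for m in ([1, 4, 20, 5, 3], [5, 4, 15, 3, 10]):  # stand-ins for get_item_ids(i.color)
    temp.append(np.asarray(m))

arr = np.array(temp)
print(arr.shape)  # (2, 5)
print(arr)
# [[ 1  4 20  5  3]
#  [ 5  4 15  3 10]]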

Fastest way to get elements from a numpy array and create a new numpy array

I have a numpy array called data with dimensions 150x4.
I want to create a new numpy array called mean with dimensions 3x4 by choosing random rows from data.
My current implementation is:
cols = data.shape[1]
K = 3
mean = np.zeros((K,cols))
for row in range(K):
    index = np.random.randint(data.shape[0])
    for col in range(cols):
        mean[row][col] = data[index][col]
Is there a faster way to do the same?
You can specify the number of random integers with numpy.random.randint (its third argument, size). Also, you should be familiar with numpy's array indexing notation: here, you can select all the elements of a row with the : specifier.
mean = data[np.random.randint(0,len(data),3),:]
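A minimal runnable version (with random stand-in data of the stated 150x4 shape):
import numpy as np

data = np.random.rand(150, 4)  # stand-in for the real data
K = 3

mean = data[np.random.randint(0, len(data), K), :]
print(mean.shape)  # (3, 4)

If the K rows should be distinct, np.random.choice(len(data), K, replace=False) gives indices without replacement.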

How to logically combine integer indices in numpy?

Does anyone know how to combine integer indices in numpy? Specifically, I've got the results of a few np.wheres and I would like to extract the elements that are common between them.
For context, I am trying to populate a large 3d array with the number of elements that are between boundary values of each cell, i.e. I have records of individual events including their time, latitude and longitude. I want to grid this into a 3D frequency matrix, where the dimensions are time, lat and lon.
I could loop over the array elements doing an np.where(timeCondition & latCondition & lonCondition) and populating each cell with the length of the where result, but I figured this would be very inefficient, as you would have to repeat a lot of the wheres.
Would it be better to just have a list of wheres for each of the cells in each dimension, and then loop through, logically combining them?
As @ali_m said, using a bitwise and (&) should be much faster, but to answer your question:
Call ravel_multi_index() to convert the multi-dimensional indices into 1-dimensional indices.
Call intersect1d() to get the indices that satisfy both conditions.
Call unravel_index() to convert the 1-dimensional indices back into multi-dimensional indices.
Here is the code:
import numpy as np
a = np.random.rand(10, 20, 30)
idx1 = np.where(a>0.2)
idx2 = np.where(a<0.4)
ridx1 = np.ravel_multi_index(idx1, a.shape)
ridx2 = np.ravel_multi_index(idx2, a.shape)
ridx = np.intersect1d(ridx1, ridx2)
idx = np.unravel_index(ridx, a.shape)
np.allclose(a[idx], a[(a>0.2) & (a<0.4)])
or you can use ridx directly:
a.ravel()[ridx]
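For comparison, the bitwise-and route mentioned at the top avoids the intermediate where results entirely; a short sketch:
import numpy as np

a = np.random.rand(10, 20, 30)

mask = (a > 0.2) & (a < 0.4)  # combine the conditions directly
idx = np.where(mask)          # same index tuple as the intersect1d route
count = mask.sum()            # number of elements between the bounds, as in the gridding problem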

Numpy signed maximum magnitude of cumsum along an axis

I have a numpy array a, a.shape=(17,90,144). I want to find the maximum magnitude of each column of cumsum(a, axis=0), but retaining the original sign. In other words, if for a given column a[:,j,i] the largest magnitude of cumsum corresponds to a negative value, I want to retain the minus sign.
The code np.amax(np.abs(a.cumsum(axis=0))) gets me the magnitude, but doesn't retain the sign. Using np.argmax instead will get me the indices I need, which I can then plug into the original cumsum array. But I can't find a good way to do so.
The following code works, but is dirty and really slow:
max_mag_signed = np.zeros((90,144))
indices = np.argmax(np.abs(a.cumsum(axis=0)), axis=0)
for j in range(90):
    for i in range(144):
        max_mag_signed[j,i] = a.cumsum(axis=0)[indices[j,i],j,i]
There must be a cleaner, faster way to do this. Any ideas?
I can't find any alternative to argmax, but at least you can speed that up with a more vectorized approach:
# store the cumsum, since it's used multiple times
cum_a = a.cumsum(axis=0)
# find the indices as before
indices = np.argmax(abs(cum_a), axis=0)
# construct the indices for the second and third dimensions
y, z = np.indices(indices.shape)
# get the values with np indexing
max_mag_signed = cum_a[indices, y, z]
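As a side note (not part of the original answer), the final gather can also be written with np.take_along_axis, which handles the index broadcasting for you; a sketch with stand-in data of the stated shape:
import numpy as np

a = np.random.rand(17, 90, 144) - 0.5  # stand-in data with both signs

cum_a = a.cumsum(axis=0)
indices = np.argmax(np.abs(cum_a), axis=0)

# indices needs a leading axis so its ndim matches cum_a's; [0] drops it again
max_mag_signed = np.take_along_axis(cum_a, indices[np.newaxis, ...], axis=0)[0]
print(max_mag_signed.shape)  # (90, 144)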
