Numpy insert matrix values with matrix index - python

I have the following code, which creates a 4D grid matrix, and I am looking to insert the rolled 2D vals matrix into this grid:
import numpy as np
k = 100
x = 20
y = 10
z = 3
grid = np.zeros((y, k, x, z))
insert_map = np.random.randint(low=0, high=y, size=(5, k, x))
vals = np.random.random((5, k))
for i in range(x):
    grid[insert_map[:, :, i], i, 0] = np.roll(vals, i)
If vals were a 1D array and I used a 1D insert_map array as a reference, this would work; however, using it in multiple dimensions seems to be an issue, and it raises this error:
ValueError: shape mismatch: value array of shape (5,100) could not be broadcast to indexing result of shape (5,100,3)
I'm confused as to why this error occurs: in my mind, grid[insert_map[:, :, i], i, 0] should give a (5, 100) insert location for the y and k portions of the grid array, while i and 0 fix the x and z portions.
Is there any way to insert the 2D (5, 100) rolled vals matrix into the 4D (10, 100, 20, 3) grid matrix by 2D indexing?

grid is (y, k, x, z).
insert_map is (5, k, x); insert_map[:, :, i] is then (5, k).
grid[insert_map[:, :, i], i, 0] will then be (5, k, z): the (5, k) insert_map index applies to the first (y) dimension only, i and 0 are consumed by the k and x dimensions, and z is left unindexed, which is where the trailing 3 in the error comes from.
vals is (5, k); roll doesn't change that.
np.roll(vals, i)[..., None] can broadcast to fill the z dimension, if that's what you want.
Your insert_map can't select values along the k dimension. Its values, created by randint, are only valid for the y dimension.
If the i and 0 are supposed to apply to the last two (x and z) dimensions, you still need an index for the k dimension. Possibilities, with a runnable example after the list, are:
grid[insert_map[:, j, i], j, i, 0]
grid[insert_map[:, :, i], 0, i, 0]
grid[insert_map[:, :, i], :, i, 0]
grid[insert_map[:, :, i], np.arange(k), i, 0]
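For instance, the np.arange(k) form pairs each of the k columns with its own column index, which matches the (5, k) shape of the rolled vals. A minimal runnable sketch using the arrays from the question (this index choice is just one of the possibilities above):
import numpy as np

k, x, y, z = 100, 20, 10, 3
grid = np.zeros((y, k, x, z))
insert_map = np.random.randint(low=0, high=y, size=(5, k, x))
vals = np.random.random((5, k))

for i in range(x):
    # (5, k) advanced index for axis 0 (y), np.arange(k) for axis 1 (k),
    # scalar i for axis 2 (x), scalar 0 for axis 3 (z); the indexing result
    # is (5, k), which matches np.roll(vals, i).
    grid[insert_map[:, :, i], np.arange(k), i, 0] = np.roll(vals, i)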

Related

Pytorch: assign values to a tensor by index

How can I assign values to a tensor by index in PyTorch, like in NumPy?
In NumPy, we can fill values into an array by index:
array = np.zeros((10, 8, 3), dtype=np.float32)
for n in range(10):
    for k in range(4):
        array[n, k, :] = x, y, -2        # x, y are different values in every loop
        array[n, 4 + k, :] = x, y, 0.4
Given a zeros tensor created with torch.zeros, how do I fill values into it by index in PyTorch?
Group the values into a tensor and then assign:
import torch

array = torch.zeros((10, 8, 3), dtype=torch.float32)
for n in range(10):
    for k in range(4):
        x, y = 1, -1
        array[n, k, :] = torch.tensor([x, y, -2])        # x, y are different values in every loop
        array[n, 4 + k, :] = torch.tensor([x, y, 0.4])

Outer sum of two numpy arrays along specified axes

I have two numpy.array objects x and y where x.shape is (P, K) and y.shape is (T, K). I want to do an outer sum on these two objects such that the result has shape (P, T, K). I'm aware of the np.add.outer and the np.einsum functions but I couldn't get them to do what I wanted.
The following gives the intended result.
x_plus_y = np.zeros((P, T, K))
for k in range(K):
    x_plus_y[:, :, k] = np.add.outer(x[:, k], y[:, k])
But I've got to imagine there's a faster way!
One option is to add a new dimension to x and add using numpy broadcasting:
out = x[:, None] + y
or as @FirefoxMetzger pointed out, it's more readable to be explicit with the dimensions:
out = x[:, None, :] + y[None, :, :]
Test:
import numpy as np

P, K, T = np.random.randint(10, 30, size=3)
x = np.random.rand(P, K)
y = np.random.rand(T, K)
x_plus_y = np.zeros((P, T, K))
for k in range(K):
    x_plus_y[:, :, k] = np.add.outer(x[:, k], y[:, k])
assert (x_plus_y == x[:, None] + y).all()

Filling a matrix in a numpythonic way

I have an L x L matrix A, which I currently fill in using the following code:
A = np.zeros((L, L))
for J in range(X):
    for a in range(L):
        for b in range(L):
            A[a][b] += alpha[J, a] * O[b, J] * A_old[a, b] * betas[J+2, b]
where X is an integer defined elsewhere, alpha and betas are of shape (X, L) (note that betas needs at least X + 2 rows for betas[J+2, b] to be valid), O is of shape (L, X), and A_old is of shape (L, L). I'm concerned about the speed of this code and am trying to find a more numpythonic way to fill in this matrix. My instinct is to do something like:
for J in range(X):
    A += alpha[J, :] * O[:, J] * A_old[:, :] * betas[J+2, :]
But this doesn't broadcast the operations correctly because of the A_old matrix (the resulting shape is right, but the values are not). What's a good way to condense this loop using numpy?
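Since A_old[a, b] doesn't depend on J, it can be factored out of the sum; what's left is a sum of products over J, which np.einsum can express in one call. A minimal sketch of this idea (assuming betas has at least X + 2 rows, which the original loop already requires):
import numpy as np

X, L = 4, 6
alpha = np.random.rand(X, L)
betas = np.random.rand(X + 2, L)  # at least X + 2 rows so betas[J+2] is valid
O = np.random.rand(L, X)
A_old = np.random.rand(L, L)

# A[a, b] = A_old[a, b] * sum_J alpha[J, a] * O[b, J] * betas[J+2, b]
A = A_old * np.einsum('ja,bj,jb->ab', alpha, O, betas[2:X + 2])

# reference triple loop
A_ref = np.zeros((L, L))
for J in range(X):
    for a in range(L):
        for b in range(L):
            A_ref[a, b] += alpha[J, a] * O[b, J] * A_old[a, b] * betas[J + 2, b]
assert np.allclose(A, A_ref)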

How to multiply N vectors by N matrices using numpy?

I have a matrix M of shape (N, L) and a 3D tensor P of shape (N, L, K). I want to get a matrix V of shape (N, K) where V[i] = M[i] @ P[i]. I can do it with a for loop, but that's inefficient; I want to do it with a single operation or a few operations so that it runs in parallel on CUDA.
I tried just multiplying it like so
V = M @ P
but that results in a 3D tensor where V[i, j] = M[j] @ P[i].
np.diagonal(M @ P).T is basically what I want, but calculating it like that wastes a lot of computation.
You could use np.einsum:
>>> M = np.random.rand(5, 2)
>>> P = np.random.rand(5, 2, 3)
>>> V = np.einsum('nl,nlk->nk', M, P)
>>> V.shape
(5, 3)
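For comparison, the same per-row product can be spelled with the @ operator by treating each row of M as a 1 x L matrix; a sketch of an equivalent (not necessarily faster) formulation:
>>> V2 = (M[:, None, :] @ P)[:, 0, :]  # (N, 1, L) @ (N, L, K) -> (N, 1, K)
>>> np.allclose(V, V2)
True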

efficient addition of (m, 2), (n, 2) arrays

I have two numpy arrays, x of shape (m, 2) and y of shape (n, 2). I would like to compute the (m, n, 2) array out where out[i, j] is the sum of x[i] and y[j]. List comprehension works:
import numpy
x = numpy.random.rand(13, 2)
y = numpy.random.rand(5, 2)
xy = numpy.array([
    [xx + yy for yy in y]
    for xx in x
])
but I was wondering if there is a more efficient solution via numpy.add.outer or something along those lines.
You can use numpy's broadcasting rules to cast the first array to the shape (13, 1, 2) and the second to the shape (1, 5, 2):
numpy.all(x[:, None, :] + y[None, :, :] == xy)
# True
The array is repeated across the dimension where None is added (since it has length 1).
Therefore the shape of the output becomes (13, 5, 2).
xy = x[:, None] + y
should do the trick.
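As for numpy.add.outer: on multidimensional inputs, a ufunc's outer returns the concatenated shape x.shape + y.shape, here (13, 2, 5, 2), so recovering the (13, 5, 2) result takes a diagonal over the two coordinate axes. A sketch for comparison only; broadcasting is simpler and avoids the wasted intermediate:
out = numpy.diagonal(numpy.add.outer(x, y), axis1=1, axis2=3)  # shape (13, 5, 2)
assert numpy.array_equal(out, x[:, None] + y)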
