How to assign values to a Tensor by index like Numpy in python?
In numpy, we can fill values to an array by index:
import numpy as np

array = np.zeros((10, 8, 3), dtype=np.float32)
for n in range(10):
    for k in range(4):
        array[n, k, :] = x, y, -2       # x, y are different values in every loop
        array[n, 4 + k, :] = x, y, 0.4
If there is a zeros tensor using torch.zeros, how to fill values to it in Pytorch by the indexes?
Group the values into a tensor and then assign:
import torch

array = torch.zeros((10, 8, 3), dtype=torch.float32)
for n in range(10):
    for k in range(4):
        x, y = 1, -1
        array[n, k, :] = torch.tensor([x, y, -2])       # x, y are different values in every loop
        array[n, 4 + k, :] = torch.tensor([x, y, 0.4])
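Not part of the original answer, but worth noting: since the third channel here is a constant, the same result can be obtained without building a small tensor inside the inner loop by filling whole slices up front and assigning the varying scalars directly. A minimal sketch, with placeholder x, y values:

import torch

array = torch.zeros((10, 8, 3), dtype=torch.float32)

# Fill the constant last channel for all rows at once.
array[:, :4, 2] = -2.0
array[:, 4:, 2] = 0.4

for n in range(10):
    for k in range(4):
        x, y = 1.0, -1.0                      # placeholder values; they differ per iteration in practice
        array[n, k, 0], array[n, k, 1] = x, y
        array[n, 4 + k, 0], array[n, 4 + k, 1] = x, y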
The input is a list of observations; every observation is a fixed-size set of ellipses (each ellipse is represented by 7 parameters).
The output is a list of images, one image per observation; we basically draw the ellipses from an observation onto a completely white image. Where several ellipses overlap, we use the mean of their RGB values.
n, m = size of the image in pixels; an image is represented as an (n, m, 3) NumPy array (3 because of RGB coding)
N = number of ellipses in every individual observation
xx, yy = np.mgrid[:n, :m]
def elipses_population_to_img_population(elipses_population):
    population_size = elipses_population.shape[0]
    img_population = np.empty((population_size, n, m, 3))
    for j in range(population_size):
        # One NaN-filled layer per ellipse; NaN marks pixels the ellipse does not cover.
        imarray = np.empty((N, n, m, 3))
        imarray.fill(np.nan)
        for i in range(N):
            x = elipses_population[j, i, 0]
            y = elipses_population[j, i, 1]
            R = elipses_population[j, i, 2]
            G = elipses_population[j, i, 3]
            B = elipses_population[j, i, 4]
            a = elipses_population[j, i, 5]
            b = elipses_population[j, i, 6]
            xx_centered = xx - x
            yy_centered = yy - y
            # Boolean mask of the pixels inside this ellipse.
            elipse = (xx_centered / a)**2 + (yy_centered / b)**2 < 1
            imarray[i, elipse, :] = np.array([R, G, B])
        # Mean over the ellipse layers, ignoring NaNs; uncovered pixels become white (255).
        means_img = np.nanmean(imarray, axis=0)
        means_img = np.nan_to_num(means_img, nan=255)
        img_population[j, :, :, :] = means_img
    return img_population
The code works correctly, but I am looking for optimization advice. I run it many times, so every small improvement would be helpful.
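No answer is included in this excerpt, but one common direction (my own sketch, not from the original post) is to avoid allocating the full (N, n, m, 3) NaN array for every image and instead accumulate a per-pixel colour sum and coverage count, which keeps the working memory at (n, m, 3) per image. The helper below takes n, m and N as explicit arguments instead of relying on globals:

import numpy as np

def elipses_population_to_img_population_fast(elipses_population, n, m, N):
    # Sketch of an accumulator-based variant; intended to match the original
    # behaviour (mean colour where ellipses overlap, white background elsewhere).
    xx, yy = np.mgrid[:n, :m]
    population_size = elipses_population.shape[0]
    img_population = np.empty((population_size, n, m, 3))
    for j in range(population_size):
        rgb_sum = np.zeros((n, m, 3))
        count = np.zeros((n, m, 1))
        for i in range(N):
            x, y, R, G, B, a, b = elipses_population[j, i]
            mask = ((xx - x) / a) ** 2 + ((yy - y) / b) ** 2 < 1
            rgb_sum[mask] += (R, G, B)
            count[mask] += 1
        # Mean colour where at least one ellipse covers the pixel, 255 (white) elsewhere.
        img_population[j] = np.divide(rgb_sum, count,
                                      out=np.full((n, m, 3), 255.0),
                                      where=count > 0)
    return img_population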
The goal is to extract a random 2x5 patch from a 5x10 image, and to do so randomly for every image in a batch. I'm looking to write a faster implementation that avoids for loops, but I haven't been able to figure out how to use torch.gather with two index arrays (idx_h and idx_w in the code example).
Naive for loop:
import torch
b = 3 # batch size
h = 5 # height
w = 10 # width
crop_border = (3, 5) # number of pixels (height, width) to crop
x = torch.arange(b * h * w).reshape(b, h, w)
print(x)
dh_ = torch.randint(0, crop_border[0], size=(b,))
dw_ = torch.randint(0, crop_border[1], size=(b,))
_dh = h - (crop_border[0] - dh_)
_dw = w - (crop_border[1] - dw_)
idx_h = torch.stack([torch.arange(d_, _d) for d_, _d in zip(dh_, _dh)])
idx_w = torch.stack([torch.arange(d_, _d) for d_, _d in zip(dw_, _dw)])
print(idx_h, idx_w)
new_shape = (b, idx_h.shape[1], idx_w.shape[1])
cropped_x = torch.empty(new_shape)
for batch in range(b):
    for height in range(idx_h.shape[1]):
        for width in range(idx_w.shape[1]):
            cropped_x[batch, height, width] = x[
                batch, idx_h[batch, height], idx_w[batch, width]
            ]
print(cropped_x)
The index arrays need to be repeated and reshaped to work with the gather operation. The fast_crop code is based on this PyTorch discussion: https://discuss.pytorch.org/t/similar-to-torch-gather-over-two-dimensions/118827
def fast_crop(x, idx1, idx2):
    """
    Gather entries of x using a pair of per-batch index grids.
    x:    N x B x V tensor
    idx1: N x H x W index tensor where idx1[i, j, k] is in [0, B)
    idx2: N x H x W index tensor where idx2[i, j, k] is in [0, V)
    Return:
        cropped: N x H x W tensor where cropped[i, j, k] = x[i, idx1[i, j, k], idx2[i, j, k]]
    """
    x = x.contiguous()
    assert idx1.shape == idx2.shape
    # Flatten the last two dimensions of x and build the matching linear indices.
    lin_idx = idx2 + x.size(-1) * idx1
    x = x.view(-1, x.size(1) * x.size(2))
    lin_idx = lin_idx.view(-1, lin_idx.shape[1] * lin_idx.shape[2])
    cropped = x.gather(-1, lin_idx)
    return cropped.reshape(idx1.shape)
idx1 = torch.repeat_interleave(idx_h, idx_w.shape[1]).reshape(new_shape)
idx2 = torch.repeat_interleave(idx_w, idx_h.shape[1], dim=0).reshape(new_shape)
cropped = fast_crop(x, idx1, idx2)
(cropped == cropped_x).all()
Using realistic numbers (b = 100, h = 100, w = 130, crop_border = (40, 95)), a 10-trial run takes the for loop 32 s, while fast_crop takes only 0.043 s.
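As an aside (not from the original post), the same crop can also be expressed with plain advanced indexing and broadcasting, which avoids building the repeated index tensors for gather. A minimal sketch, continuing with the b, x, idx_h, idx_w and cropped_x defined above:

# result[i, j, k] = x[i, idx_h[i, j], idx_w[i, k]]
batch_idx = torch.arange(b)[:, None, None]              # shape (b, 1, 1)
cropped_ix = x[batch_idx, idx_h[:, :, None], idx_w[:, None, :]]
print((cropped_ix == cropped_x).all())                  # True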
I have the following code which creates a 4D grid matrix and I am looking to insert the rolled 2D vals matrix into this grid.
import numpy as np
k = 100
x = 20
y = 10
z = 3
grid = np.zeros((y, k, x, z))
insert_map = np.random.randint(low=0, high=y, size=(5, k, x))
vals = np.random.random((5, k))
for i in range(x):
    grid[insert_map[:, :, i], i, 0] = np.roll(vals, i)
If vals were a 1D array and I used a 1D insert_map array as a reference, it would work; however, using it in multiple dimensions seems to be an issue, and it raises this error:
ValueError: shape mismatch: value array of shape (5,100) could not be broadcast to indexing result of shape (5,100,3)
I'm confused about why it raises that error: in my mind, grid[insert_map[:, :, i], i, 0] should give a (5, 100) insert location for the y and k portions of the grid array and then fix the x and z portions with i and 0.
Is there any way to insert the 2D (5, 100) rolled vals matrix into the 4D (10, 100, 20, 3) grid matrix by 2D indexing?
grid is (y, k, x, z)
insert_map is (5, k, x). insert_map[:, :, i] is then (5,k).
grid[insert_map[:, :, i], i, 0] will then be (5,k,z). insert_map[] only indexes the first y dimension.
vals is (5,k); roll doesn't change that.
np.roll(vals, i)[...,None] can broadcast to fill the z dimension, if that's what you want.
Your insert_map can't select values along the k dimension. Its values, created by randint, are only valid for the y dimension.
If the i and 0 are supposed to apply to the last two dimensions, you still need an index for the k dimension. Possibilities are:
grid[insert_map[:, j, i], j, i, 0]
grid[insert_map[:, :, i], 0, i, 0]
grid[insert_map[:, :, i], :, i, 0]
grid[insert_map[:, :, i], np.arange(k), i, 0]
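For completeness (my own sketch, not part of the original answer), here is how the last option can be combined with broadcasting of the rolled values, assuming the intent is to pair each insert_map entry with its own k position:

import numpy as np

k, x, y, z = 100, 20, 10, 3
grid = np.zeros((y, k, x, z))
insert_map = np.random.randint(low=0, high=y, size=(5, k, x))
vals = np.random.random((5, k))

for i in range(x):
    # Pair the (5, k) row indices with an explicit k index; i and 0 fix the x and z dims.
    grid[insert_map[:, :, i], np.arange(k), i, 0] = np.roll(vals, i)
    # Or broadcast the values across the whole z dimension instead of only z = 0:
    # grid[insert_map[:, :, i], np.arange(k), i] = np.roll(vals, i)[..., None]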
I have two NumPy arrays, x of shape (m, 2) and y of shape (n, 2). I would like to compute the (m, n, 2) array out where out[i, j] is the sum of x[i] and y[j]. A list comprehension works
import numpy
x = numpy.random.rand(13, 2)
y = numpy.random.rand(5, 2)
xy = numpy.array([
    [xx + yy for yy in y]
    for xx in x
])
but I was wondering if there is a more efficient solution via numpy.add.outer or something along those lines.
You can use NumPy's broadcasting rules to cast the first array to the shape (13, 1, 2) and the second to the shape (1, 5, 2):
numpy.all(x[:, None, :] + y[None, :, :] == xy)
# True
Each array is repeated along the dimension where None is added (since that dimension has length 1), so the shape of the output becomes (13, 5, 2).
xy = x[:, None] + y
should do the trick.
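A quick sanity check (my own snippet) confirming that broadcasting matches the list comprehension:

import numpy

x = numpy.random.rand(13, 2)
y = numpy.random.rand(5, 2)

xy_loop = numpy.array([[xx + yy for yy in y] for xx in x])
xy_bcast = x[:, None] + y                 # x becomes (13, 1, 2); y broadcasts as (1, 5, 2)

print(xy_bcast.shape)                     # (13, 5, 2)
print(numpy.allclose(xy_loop, xy_bcast))  # True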
I have a 3D array (a 2D array of vectors), and I want to transform each vector with a rotation matrix. The rotation angles, in radians, are stored in two separate 2D arrays called cols and rows.
I've been able to have NumPy compute the angles for me already, without a Python loop. Now I'm looking for a way to have NumPy generate the rotation matrices, too, hopefully resulting in a great performance boost.
size = img.shape[:2]
# Create an array that assigns each pixel the percentage of
# the correction (value between -1 and 1, distributed linearly).
cols = np.array([np.arange(size[1]) for __ in range(size[0])]) / (size[1] - 1) * 2 - 1
rows = np.array([np.arange(size[0]) for __ in range(size[1])]).T / (size[0] - 1) * 2 - 1
# Atan distribution based on F-number and Sensor size.
cols = np.arctan(sh * cols / (2 * f))
rows = np.arctan(sv * rows / (2 * f))
### This is the loop that I would like to remove and find a
### clever way to make NumPy do the same operation natively.
for i in range(size[0]):
    for j in range(size[1]):
        ah = cols[i,j]
        av = rows[i,j]
        # Y-rotation.
        mat = np.matrix([
            [ np.cos(ah), 0, np.sin(ah)],
            [0, 1, 0],
            [-np.sin(ah), 0, np.cos(ah)]
        ])
        # X-rotation.
        mat *= np.matrix([
            [1, 0, 0],
            [0, np.cos(av), -np.sin(av)],
            [0, np.sin(av), np.cos(av)]
        ])
        img[i,j] = img[i,j] * mat
return img
Is there any clever way to rewrite the loop in NumPy operations?
(Let's assume the shape of img to be (a, b, 3).)
Firstly, cols and rows do not need to be fully expanded to (a, b) (you could write cols[j] instead of cols[i,j]), and they can be easily generated using np.linspace:
cols = np.linspace(-1, 1, size[1]) # shape: (b,)
rows = np.linspace(-1, 1, size[0]) # shape: (a,)
cols = np.arctan(sh * cols / (2*f))
rows = np.arctan(sv * rows / (2*f))
Then we precalculate the components of the matrices.
# shape: (b,)
cos_ah = np.cos(cols)
sin_ah = np.sin(cols)
zeros_ah = np.zeros_like(cols)
ones_ah = np.ones_like(cols)
# shape: (a,)
cos_av = np.cos(rows)
sin_av = np.sin(rows)
zeros_av = np.zeros_like(rows)
ones_av = np.ones_like(rows)
And then construct the rotation matrices:
# shape: (3, 3, b)
y_mat = np.array([
[cos_ah, zeros_ah, sin_ah],
[zeros_ah, ones_ah, zeros_ah],
[-sin_ah, zeros_ah, cos_ah],
])
# shape: (3, 3, a)
x_mat = np.array([
[ones_av, zeros_av, zeros_av],
[zeros_av, cos_av, -sin_av],
[zeros_av, sin_av, cos_av],
])
Now let's see. If we have a loop we would write:
for i in range(size[0]):
    for j in range(size[1]):
        img[i, j, :] = img[i, j, :] @ y_mat[:, :, j] @ x_mat[:, :, i]
or, if we expand out the matrix multiplications:
img[i, j, n] = sum over k, m of img[i, j, k] * y_mat[k, m, j] * x_mat[m, n, i]
This can be handled nicely using np.einsum (note that the indices i, j, k, m, n correspond exactly to the equation above):
img = np.einsum('ijk,kmj,mni->ijn', img, y_mat, x_mat)
To summarize:
size = img.shape[:2]
cols = np.linspace(-1, 1, size[1]) # shape: (b,)
rows = np.linspace(-1, 1, size[0]) # shape: (a,)
cols = np.arctan(sh * cols / (2*f))
rows = np.arctan(sv * rows / (2*f))
cos_ah = np.cos(cols)
sin_ah = np.sin(cols)
zeros_ah = np.zeros_like(cols)
ones_ah = np.ones_like(cols)
cos_av = np.cos(rows)
sin_av = np.sin(rows)
zeros_av = np.zeros_like(rows)
ones_av = np.ones_like(rows)
y_mat = np.array([
[cos_ah, zeros_ah, sin_ah],
[zeros_ah, ones_ah, zeros_ah],
[-sin_ah, zeros_ah, cos_ah],
])
x_mat = np.array([
[ones_av, zeros_av, zeros_av],
[zeros_av, cos_av, -sin_av],
[zeros_av, sin_av, cos_av],
])
return np.einsum('ijk,kmj,mni->ijn', img, y_mat, x_mat)
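As a usage sketch (not from the original answer), the einsum result can be spot-checked against the explicit per-pixel matrix product from the question; sh, sv and f below are placeholder sensor-size and focal-length values:

import numpy as np

# Placeholder parameters and a small random (a, b, 3) image.
a, b, sh, sv, f = 4, 6, 36.0, 24.0, 50.0
img = np.random.rand(a, b, 3)

cols = np.arctan(sh * np.linspace(-1, 1, b) / (2 * f))
rows = np.arctan(sv * np.linspace(-1, 1, a) / (2 * f))
cos_ah, sin_ah = np.cos(cols), np.sin(cols)
zeros_ah, ones_ah = np.zeros_like(cols), np.ones_like(cols)
cos_av, sin_av = np.cos(rows), np.sin(rows)
zeros_av, ones_av = np.zeros_like(rows), np.ones_like(rows)
y_mat = np.array([[cos_ah, zeros_ah, sin_ah],
                  [zeros_ah, ones_ah, zeros_ah],
                  [-sin_ah, zeros_ah, cos_ah]])    # shape (3, 3, b)
x_mat = np.array([[ones_av, zeros_av, zeros_av],
                  [zeros_av, cos_av, -sin_av],
                  [zeros_av, sin_av, cos_av]])     # shape (3, 3, a)

out = np.einsum('ijk,kmj,mni->ijn', img, y_mat, x_mat)

# Spot-check one pixel against the explicit per-pixel matrix product.
i, j = 2, 3
expected = img[i, j] @ y_mat[:, :, j] @ x_mat[:, :, i]
print(np.allclose(out[i, j], expected))            # True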