I have two methods: one to convert a 4D matrix (tensor) into a 2D matrix, and another to convert a 2D matrix back into a 4D tensor.
Reshaping from 4D to 2D works well, but when I try to convert back into a tensor I don't get the same order of the elements. The methods are:
# Method to convert the tensor into a matrix
def tensor2matrix(tensor):
    # rows, columns, channels and filters
    r, c, ch, f = tensor[0].shape
    new_dim = [r*c*ch, f]  # Infer the new matrix dims
    # Transpose is necessary because the columns are the channel weights
    # flattened into columns
    return np.reshape(np.transpose(tensor[0], [2,0,1,3]), new_dim)

# Method to convert the matrix into a tensor
def matrix2tensor(matrix, fs):
    return np.reshape(matrix, fs, order="F")
I think the problem is in np.transpose, because with a plain matrix I can only swap rows and columns... Is there any way to recover the tensor from the matrix without loops?
Consider the following changes:

- Replace the two tensor[0] by tensor, to avoid
  ValueError: not enough values to unpack (expected 4, got 3)
  when running the example provided below
- Ensure both np.reshape calls use the same order="F"
- Use another np.transpose call inside matrix2tensor to undo the np.transpose from tensor2matrix
The updated code is
import numpy as np

# Method to convert the tensor into a matrix
def tensor2matrix(tensor):
    # rows, columns, channels and filters
    r, c, ch, f = tensor.shape
    new_dim = [r*c*ch, f]  # Infer the new matrix dims
    # Transpose is necessary because the columns are the channel weights
    # flattened into columns
    return np.reshape(np.transpose(tensor, [2,0,1,3]), new_dim, order="F")

# Method to convert the matrix into a tensor
def matrix2tensor(matrix, fs):
    return np.transpose(np.reshape(matrix, fs, order="F"), [1,2,0,3])
and it can be tested like this:
x, y, z, t = 2, 3, 4, 5
shape = (x, y, z, t)
m1 = np.arange(x*y*z*t).reshape((x*y*z, t))
t1 = matrix2tensor(m1, shape)
m2 = tensor2matrix(t1)
assert (m1 == m2).all()
t2 = matrix2tensor(m2, shape)
assert (t1 == t2).all()
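As a quick illustration of why matching the order argument matters (a toy example, not part of the original answer), C order flattens row by row while Fortran order flattens column by column:

import numpy as np

a = np.arange(6).reshape(2, 3)
print(np.reshape(a, 6, order="C"))  # [0 1 2 3 4 5]  row-major (C) order
print(np.reshape(a, 6, order="F"))  # [0 3 1 4 2 5]  column-major (Fortran) order

Mixing the two orders between tensor2matrix and matrix2tensor is exactly what scrambles the element order.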
I'm trying to work with a custom Feedforward implementation that takes varying rows of the input and performs some sort of operation on it.
For example, imagine if the function, f, just sums the rows and columns of an input Tensor:
import torch

f = lambda x: torch.sum(x)  # sum across all dimensions, producing a scalar
Now, for the input Tensor I have an (n, m) matrix and I want to map the function f over all the rows except the row under consideration. For example, here is the vanilla implementation that works:
d = []  # append the per-row results to d
my_tensor = torch.rand(3, 5, requires_grad=True)  # shape (n, m)
n = my_tensor.shape[0]  # number of rows
indices = list(range(n))  # list of row indices
for i in range(n):  # loop through the indices
    curr_range = indices[:i] + indices[i+1:]  # fetch all indices except for the current one
    d.append(f(my_tensor[curr_range]))  # calculate sum over all elements excluding row i
This produces an (n, 1) result, which is what I want. The problem is that PyTorch cannot auto-differentiate over this, and I'm getting errors to do with a lack of grad because I have non-primitive Torch operations:
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
It turns out that casting the output using x = torch.tensor(d, requires_grad=True) did the trick in the feedforward.
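For reference, here is a minimal runnable sketch of the whole fix. Note that it uses torch.stack instead of the torch.tensor cast above; that substitution is my own addition - stacking the per-row sums keeps them connected to the autograd graph, so gradients can flow back to my_tensor:

import torch

f = lambda x: torch.sum(x)  # sum across all dimensions, producing a scalar

my_tensor = torch.rand(3, 5, requires_grad=True)  # (n, m)
n = my_tensor.shape[0]

d = []
indices = list(range(n))
for i in range(n):
    curr_range = indices[:i] + indices[i+1:]  # every row except row i
    d.append(f(my_tensor[curr_range]))

out = torch.stack(d)         # shape (n,), still differentiable
out.sum().backward()         # gradients flow back to my_tensor
print(my_tensor.grad.shape)  # torch.Size([3, 5])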
I would like to count the occurrences of the numbers in a 3-dimensional tensor (returned as a tensor at the corresponding indexes, as tf.math.bincount does).
For a 2d tensor, you can simply do this:
import tensorflow as tf

T = tf.cast(tf.round(25*tf.random.uniform((5,8))), tf.int32)  # bincount requires integer input
bincounts = tf.cast(tf.math.bincount(T, axis=-1), tf.float32)
But on a 3d tensor, the only way I found is looping over the third dimension, like this:
third_dim = 10
T = tf.cast(tf.round(25*tf.random.uniform((5,8,third_dim))), tf.int32)
bincounts = []
for i in range(third_dim):
    bincounts.append(tf.math.bincount(T[:,:,i], axis=-1))
bincounts = tf.stack(bincounts, -1)
Does anyone know if there is a way to apply such a function directly on all the dimensions?
I found a way:
Apply tf.math.bincount to the reshaped tensor, then reshape back to the shape you want:
third_dim = 10
T = tf.cast(tf.round(25*tf.random.uniform((5,8,third_dim))), tf.int32)
T2 = tf.reshape(T, (5*8, third_dim))
bincounts2 = tf.math.bincount(T2, axis=-1)
bincounts = tf.reshape(bincounts2, [5, 8, bincounts2.shape[-1]])
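A quick sanity check of the reshape-based version (a sketch using the same example values):

print(bincounts.shape)  # (5, 8, N) with N = max value + 1; one count vector per (row, col) position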
I am trying to build a list of matrices using numpy, but when I try to append a matrix to an empty tensor, I get the error:
ValueError: all the input arrays must have same number of dimensions
Concatenate and append both seem to fail. I tried calling:
tensor = np.concatenate((tensor, matrix), axis=0)
and
tensor = np.append(tensor, matrix, axis=0)
but I get the same error either way.
The tensor starts with a size of [0, h, w], and the matrix is of size [h, w]. The matrix is the correct shape in the direction I want to append to, but it won't seem to attach.
It seems matrix represents the incoming arrays, which you accumulate into tensor. So, to solve it, add a new leading axis to matrix with None/np.newaxis and then concatenate with tensor -
np.concatenate((tensor, matrix[None]),axis=0)
If you are accumulating, store it back into tensor.
Or use np.vstack((tensor, matrix[None])).
Sample run -
In [16]: h,w = 3,4
...: a = np.random.rand(0,h,w)
...: b = np.random.rand(h,w)
In [17]: np.concatenate((a, b[None]),axis=0).shape
Out[17]: (1, 3, 4)
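For completeness, a minimal sketch of the accumulation pattern mentioned above (the shapes and the loop are hypothetical):

import numpy as np

h, w = 3, 4
tensor = np.empty((0, h, w))  # start with an empty stack of matrices
for _ in range(5):            # accumulate five incoming matrices
    matrix = np.random.rand(h, w)
    tensor = np.concatenate((tensor, matrix[None]), axis=0)
print(tensor.shape)           # (5, 3, 4)

If all the matrices are available up front, collecting them in a list and calling np.stack once is usually cheaper than repeated concatenation.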
I have a 3D numpy array of shape (1, 420, 580); the 2nd and 3rd dimensions hold a grayscale image that I want to display using OpenCV. How do I pull the 2D array out of the 3D array?
I created a short routine to do this, but I bet there is a better way.
import numpy as np

# helper function to remove the 1st dimension
def pull_image(in_array):
    rows = in_array.shape[1]  # vertical
    cols = in_array.shape[2]  # horizontal
    out_array = np.zeros((rows, cols), np.uint8)  # create new array to hold image data
    for r in range(rows):
        for c in range(cols):
            out_array[r, c] = in_array[:, r, c]
    return out_array
If you only ever have the first dimension == 1, then you could simply reshape the array...

if in_array.shape[0] == 1:
    return in_array.reshape(in_array.shape[1:])

otherwise, you can use numpy's basic slicing...

else:
    return in_array[0, :, :]
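For comparison, here is a self-contained sketch of the approaches side by side (np.squeeze is my own addition, not mentioned in the answer above, but it does the same job when the leading axis has size 1):

import numpy as np

in_array = np.zeros((1, 420, 580), np.uint8)
img_a = in_array[0, :, :]                     # basic slicing, returns a view
img_b = in_array.reshape(in_array.shape[1:])  # reshape away the leading axis
img_c = np.squeeze(in_array, axis=0)          # drop the size-1 axis explicitly
assert img_a.shape == img_b.shape == img_c.shape == (420, 580)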
I have a Numpy array of shape (4320,8640). I would like to have an array of shape (2160,4320).
You'll notice that each cell of the new array maps to a 2x2 set of cells in the old array. I would like a cell's value in the new array to be the sum of the values in this block in the old array.
I can achieve this as follows:
import numpy
#Generate an example array
arr = numpy.random.randint(10,size=(4320,8640))
#Perform the transformation
arrtrans = numpy.array([ [ arr[y][x]+arr[y+1][x]+arr[y][x+1]+arr[y+1][x+1] for x in range(0,8640,2)] for y in range(0,4320,2)])
But this is slow and more than a little ugly.
Is there a way to do this using Numpy (or an interoperable package)?
When the window fits exactly into the array, reshaping to more dimensions and collapsing the extra dimensions with np.sum is sort of the canonical way of doing this with numpy:
>>> a = np.random.rand(4320,8640)
>>> a.shape
(4320, 8640)
>>> a_small = a.reshape(2160, 2, 4320, 2).sum(axis=(1, 3))
>>> a_small.shape
(2160, 4320)
>>> np.allclose(a_small[100, 203], a[200:202, 406:408].sum())
True
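The same idea wraps naturally into a small helper for arbitrary block sizes (a hypothetical helper, assuming each block size divides the corresponding array dimension exactly):

import numpy as np

def block_sum(a, bh, bw):
    # Sum non-overlapping bh x bw blocks of a 2-D array.
    h, w = a.shape
    return a.reshape(h // bh, bh, w // bw, bw).sum(axis=(1, 3))

a = np.random.rand(4320, 8640)
assert block_sum(a, 2, 2).shape == (2160, 4320)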
I'm not sure the package you want exists, but this code computes much faster.
>>> arrtrans2 = arr[::2, ::2] + arr[::2, 1::2] + arr[1::2, ::2] + arr[1::2, 1::2]
>>> numpy.allclose(arrtrans, arrtrans2)
True
Here ::2 and 1::2 select the indices 0, 2, 4, ... and 1, 3, 5, ... respectively.
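To make the index pattern concrete, here is the same slicing on a small array (a toy example, not from the original answer):

import numpy as np

arr = np.arange(16).reshape(4, 4)
blocks = arr[::2, ::2] + arr[::2, 1::2] + arr[1::2, ::2] + arr[1::2, 1::2]
# blocks[0, 0] sums the top-left 2x2 block: 0 + 1 + 4 + 5
assert blocks[0, 0] == 10 and blocks.shape == (2, 2)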
You are operating on sliding windows of the original array. There are numerous questions and answers on SO regarding sliding windows and numpy. By manipulating the strides of an array, this process can be sped up considerably. Here is a generic function that returns (x, y) windows of the array, with or without overlap. Using this stride trick appears to be just a hair slower than @mskimm's solution, but it's a nice thing to have in your toolkit. This function is not mine - it was found at Efficient Overlapping Windows with Numpy.
import numpy as np
from numpy.lib.stride_tricks import as_strided as ast

def norm_shape(shape):
    '''
    Normalize numpy array shapes so they're always expressed as a tuple,
    even for one-dimensional shapes.

    Parameters
        shape - an int, or a tuple of ints

    Returns
        a shape tuple

    from http://www.johnvinyard.com/blog/?p=268
    '''
    try:
        i = int(shape)
        return (i,)
    except TypeError:
        # shape was not a number
        pass

    try:
        t = tuple(shape)
        return t
    except TypeError:
        # shape was not iterable
        pass

    raise TypeError('shape must be an int, or a tuple of ints')
def sliding_window(a, ws, ss=None, flatten=True):
    '''
    Return a sliding window over a in any number of dimensions

    Parameters:
        a - an n-dimensional numpy array
        ws - an int (a is 1D) or tuple (a is 2D or greater) representing the size
             of each dimension of the window
        ss - an int (a is 1D) or tuple (a is 2D or greater) representing the
             amount to slide the window in each dimension. If not specified, it
             defaults to ws.
        flatten - if True, all slices are flattened; otherwise, there is an
                  extra dimension for each dimension of the input.

    Returns
        an array containing each n-dimensional window from a

    from http://www.johnvinyard.com/blog/?p=268
    '''
    if None is ss:
        # ss was not provided. the windows will not overlap in any direction.
        ss = ws
    ws = norm_shape(ws)
    ss = norm_shape(ss)

    # convert ws, ss, and a.shape to numpy arrays so that we can do math in every
    # dimension at once.
    ws = np.array(ws)
    ss = np.array(ss)
    shape = np.array(a.shape)

    # ensure that ws, ss, and a.shape all have the same number of dimensions
    ls = [len(shape), len(ws), len(ss)]
    if 1 != len(set(ls)):
        error_string = 'a.shape, ws and ss must all have the same length. They were {}'
        raise ValueError(error_string.format(str(ls)))

    # ensure that ws is smaller than a in every dimension
    if np.any(ws > shape):
        error_string = 'ws cannot be larger than a in any dimension. a.shape was {} and ws was {}'
        raise ValueError(error_string.format(str(a.shape), str(ws)))

    # how many slices will there be in each dimension?
    newshape = norm_shape(((shape - ws) // ss) + 1)
    # the shape of the strided array will be the number of slices in each dimension
    # plus the shape of the window (tuple addition)
    newshape += norm_shape(ws)
    # the strides tuple will be the array's strides multiplied by step size, plus
    # the array's strides (tuple addition)
    newstrides = norm_shape(np.array(a.strides) * ss) + a.strides
    strided = ast(a, shape=newshape, strides=newstrides)
    if not flatten:
        return strided

    # Collapse strided so that it has one more dimension than the window. I.e.,
    # the new array is a flat list of slices.
    meat = len(ws) if ws.shape else 0
    firstdim = (np.prod(newshape[:-meat]),) if ws.shape else ()
    dim = firstdim + (newshape[-meat:])
    # remove any dimensions with size 1
    dim = tuple(filter(lambda i: i != 1, dim))  # tuple() needed under Python 3
    return strided.reshape(dim)
Usage:

# 2x2 windows with NO overlap; b has shape (2160, 4320, 2, 2)
b = sliding_window(arr, (2,2), flatten=False)
c = b.sum((2, 3))  # sum over the two trailing window axes
Using numpy.einsum instead gives an approximate 24% performance improvement:
c = np.einsum('ijkl -> ij', b)
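Putting it together (a sketch that assumes arr and arrtrans2 from the earlier snippets are in scope):

b = sliding_window(arr, (2, 2), flatten=False)  # shape (2160, 4320, 2, 2)
c = np.einsum('ijkl -> ij', b)                  # collapse each 2x2 window
assert c.shape == (2160, 4320)
assert np.allclose(c, arrtrans2)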
One more SO Q&A example: How can I efficiently process a numpy array in blocks similar to Matlab's blkproc (blockproc) function; the selected answer there would work for you as well.