Usage of argmax from tf.nn.max_pool_with_argmax tensorflow - python

I am trying to use the argmax result of tf.nn.max_pool_with_argmax() to index another tensor. For simplicity, let's say I am trying to implement the following:
output, argmax = tf.nn.max_pool_with_argmax(input, ksize, strides, padding)
tf.assert_equal(input[argmax],output)
Now my question is how do I implement the necessary indexing operation input[argmax] to achieve the desired result? I am guessing this involves some usage of tf.gather_nd() and related calls, but I cannot figure it out. If necessary, we could assume that input has [BatchSize, Height, Width, Channel] dimensions.
Thx for your help!
Mat

I found a solution using tf.gather_nd and it works, although it seems not so elegant. I used the function unravel_argmax that was posted here.
def unravel_argmax(argmax, shape):
    output_list = []
    output_list.append(argmax // (shape[2] * shape[3]))
    output_list.append(argmax % (shape[2] * shape[3]) // shape[3])
    return tf.stack(output_list)
def max_pool(input, ksize, strides, padding):
    output, arg_max = tf.nn.max_pool_with_argmax(input=input, ksize=ksize, strides=strides, padding=padding)
    shape = input.get_shape()
    arg_max = tf.cast(arg_max, tf.int32)
    unraveled = unravel_argmax(arg_max, shape)
    indices = tf.transpose(unraveled, (1, 2, 3, 4, 0))
    channels = shape[-1]
    bs = tf.shape(input)[0]
    t1 = tf.range(channels, dtype=arg_max.dtype)[None, None, None, :, None]
    t2 = tf.tile(t1, multiples=(bs,) + tuple(indices.get_shape()[1:-2]) + (1, 1))
    t3 = tf.concat((indices, t2), axis=-1)
    t4 = tf.range(tf.cast(bs, dtype=arg_max.dtype))
    t5 = tf.tile(t4[:, None, None, None, None], (1,) + tuple(indices.get_shape()[1:-2].as_list()) + (channels, 1))
    t6 = tf.concat((t5, t3), -1)
    return tf.gather_nd(input, t6)
In case anyone has a more elegant solution, I'd still be curious to know.
Mat

This small snippet works:
def get_results(data, other_tensor):
    pooled_data, indices = tf.nn.max_pool_with_argmax(data, ksize=[1, ksize, ksize, 1], strides=[1, stride, stride, 1], padding='VALID', include_batch_in_index=True)
    b, w, h, c = other_tensor.get_shape().as_list()
    other_tensor_pooled = tf.gather(tf.reshape(other_tensor, shape=[b * w * h * c, ]), indices)
    return other_tensor_pooled
The above indices can be used to index the tensor. The function actually returns flattened indices, and to use it with anything with batch_size > 1 you need to pass include_batch_in_index=True in order to get correct results. I am assuming here that other_tensor has the same batch size as data.
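A minimal usage sketch (shapes and the ksize/stride values are illustrative; as in the snippet above, ksize and stride are read from the enclosing scope, and TF 2.x is assumed):

import tensorflow as tf

ksize, stride = 2, 2                              # assumed pooling window and stride
data = tf.random.normal([4, 8, 8, 3])             # [batch, height, width, channels]
other_tensor = tf.random.normal([4, 8, 8, 3])     # same batch size (and shape) as data
pooled_other = get_results(data, other_tensor)    # values of other_tensor at data's argmax positions
print(pooled_other.shape)                         # (4, 4, 4, 3)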

I am doing it in this way:
def max_pool(input, ksize, strides, padding):
    output, arg_max = tf.nn.max_pool_with_argmax(input=input, ksize=ksize, strides=strides, padding=padding)
    shape = tf.shape(output)
    output1 = tf.reshape(tf.gather(tf.reshape(input, [-1]), arg_max), shape)
    err = tf.reduce_sum(tf.square(tf.subtract(output, output1)))
    return output1, err
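A quick sanity check of this version (TF 2.x eager execution assumed; with the default include_batch_in_index=False the flattened argmax addresses a single image, so the check below uses a batch of one):

import tensorflow as tf

x = tf.random.normal([1, 4, 4, 3])
pooled, err = max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
print(float(err))    # expected: 0.0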

Related

Dealing with memory issue (SIGKILL) when manipulating large arrays

I wrote this function to perform a rolling sum on numpy arrays, inspired by this post
def np_rolling_sum(arr, n, axis=0):
    out = np.cumsum(arr, axis=axis)
    slc1 = [slice(None)] * len(arr.shape)
    slc2 = [slice(None)] * len(arr.shape)
    slc1[axis] = slice(n, None)
    slc2[axis] = slice(None, -n)
    out = out[tuple(slc1)] - out[tuple(slc2)]
    shape = list(out.shape)
    shape[axis] = arr.shape[axis] - out.shape[axis]
    out = np.concatenate((np.full(shape, 0), out), axis=axis)
    return out
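For example, on a small array the first n positions along the chosen axis are padded with zeros and the remaining entries hold the rolling sums:

>>> np_rolling_sum(np.arange(6), 3)
array([ 0,  0,  0,  6,  9, 12])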
It works fine, except when I need to use it on large arrays (size is around 1bn). In that case, I get a SIGKILL on this line:
out = out[tuple(slc1)] - out[tuple(slc2)]
I already tried deleting arr after the cumsum, since I no longer need it (apart from its shape, which I can store before the deletion), but it didn't help.
My next guess would be to implement batch management for the operation causing the memory issue. Is there a better way to write this function so that it can handle larger arrays?
Thanks for your help!
For people who might be interested, I finally added a decorator that checks if numpy arguments are greater than a given size. If so, it turns them into dask arrays.
In order to keep the main function as close as possible to the original, I also added an argument that indicates which library should be used: numpy or dask.array.
Here is the final result:
import numpy as np
import dask.array as da

threshold = 50_000_000

def large_file_handler(func):
    def wrapper(*args, **kwargs):
        pos = list(args)
        for i in range(len(pos)):
            if type(pos[i]) == np.ndarray and pos[i].size > threshold:
                pos[i] = da.from_array(pos[i])
                kwargs['func_lib'] = da
        for k in list(kwargs):    # iterate over a copy since we may add 'func_lib'
            if type(kwargs[k]) == np.ndarray and kwargs[k].size > threshold:
                kwargs[k] = da.from_array(kwargs[k])
                kwargs['func_lib'] = da
        return func(*pos, **kwargs)
    return wrapper

@large_file_handler
def np_rolling_sum(arr, n, axis=0, func_lib=np):
    out = func_lib.cumsum(arr, axis=axis)
    slc1 = [slice(None)] * len(arr.shape)
    slc2 = [slice(None)] * len(arr.shape)
    slc1[axis] = slice(n, None)
    slc2[axis] = slice(None, -n)
    out = out[tuple(slc1)] - out[tuple(slc2)]
    shape = list(out.shape)
    shape[axis] = arr.shape[axis] - out.shape[axis]
    out = func_lib.concatenate((np.full(shape, 0), out), axis=axis)
    return np.array(out)
Please feel free to tell me if this could be improved.
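A minimal usage sketch (array sizes are illustrative; lowering threshold just makes it easy to see the dask path without allocating a huge array):

arr = np.random.rand(10_000_000)                               # below the default threshold: handled by numpy
small_result = np_rolling_sum(arr, n=100)

threshold = 1_000                                              # hypothetical lower threshold, for demonstration only
big_result = np_rolling_sum(np.random.rand(100_000), n=100)    # now routed through dask.array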

Implement ConvND in Tensorflow

So I need an ND convolutional layer that also supports complex numbers, so I decided to code it myself.
I tested this code with numpy alone and it worked (several channels, 2D and 1D, and complex inputs). However, I run into problems when I do it in TF.
This is my code so far:
def call(self, inputs):
    with tf.name_scope("ComplexConvolution_" + str(self.layer_number)) as scope:
        inputs = self._verify_inputs(inputs)    # Check inputs are of expected shape and format
        inputs = self.apply_padding(inputs)     # Add zeros if needed
        output_np = np.zeros(                   # I use np because tf does not support the assignment
            (inputs.shape[0],) +                # Per each image
            self.output_size,                   # Image out size
            dtype=self.input_dtype              # To support complex numbers
        )
        img_index = 0
        for image in inputs:
            for filter_index in range(self.filters):
                for i in range(int(np.prod(self.output_size[:-1]))):    # for each element in the output
                    index = np.unravel_index(i, self.output_size[:-1])
                    start_index = tuple([a * b for a, b in zip(index, self.stride_shape)])
                    end_index = tuple([a + b for a, b in zip(start_index, self.kernel_shape)])
                    # set_trace()
                    sector_slice = tuple(
                        [slice(start_index[ind], end_index[ind]) for ind in range(len(start_index))]
                    )
                    sector = image[sector_slice]
                    new_value = tf.reduce_sum(sector * self.kernels[filter_index]) + self.bias[filter_index]
                    # I use Tied Bias https://datascience.stackexchange.com/a/37748/75968
                    output_np[img_index][index][filter_index] = new_value    # The complicated line
            img_index += 1
        output = apply_activation(self.activation, output_np)
    return output
input_size is a tuple of shape (dim1, dim2, ..., dim3, channels). A 2D RGB conv, for example, will be (32, 32, 3), and inputs will have shape (None, 32, 32, 3).
The output size is calculated from an equation I found in this paper: A guide to convolution arithmetic for deep learning
out_list = []
for i in range(len(self.input_size) - 1):    # -1 because the number of input channels is irrelevant
    out_list.append(int(np.floor((self.input_size[i] + 2 * self.padding_shape[i] - self.kernel_shape[i]) / self.stride_shape[i]) + 1))
out_list.append(self.filters)
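For example, with input_size = (32, 32, 3), a 3x3 kernel, stride 1 and no padding (illustrative values), each spatial dimension becomes floor((32 + 0 - 3) / 1) + 1 = 30, and the last entry of out_list is the number of filters.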
Basically, I use np.zeros because if I use tf.zeros I cannot assign the new_value and I get:
TypeError: 'Tensor' object does not support item assignment
However, in this current state I am getting:
NotImplementedError: Cannot convert a symbolic Tensor (placeholder_1:0) to a numpy array.
I get this on that same assignment. I don't see an easy fix; I think I should change the strategy of the code completely.
In the end, I did it in a very inefficient way based on this comment (also mentioned here), but at least it works:
new_value = tf.reduce_sum(sector * self.kernels[filter_index]) + self.bias[filter_index]
indices = (img_index,) + index + (filter_index,)
mask = tf.Variable(tf.fill(output_np.shape, 1))
mask = mask[indices].assign(0)
mask = tf.cast(mask, dtype=self.input_dtype)
output_np = output_np * mask + (1 - mask) * new_value
I say inefficient because I create a whole new array for each assignment. My code is taking ages to compute at the moment, so I will keep looking for improvements and post here if I find something better.
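A possibly cheaper alternative (not what the code above uses) would be tf.tensor_scatter_nd_update, which writes individual elements at given indices without building a full mask per assignment; a minimal sketch with illustrative shapes (a real dtype is shown, complex support would need to be checked):

import tensorflow as tf

output = tf.zeros((1, 4, 4, 2))            # hypothetical (batch, h, w, filters) output
indices = tf.constant([[0, 2, 3, 1]])      # one (img_index,) + index + (filter_index,) position
updates = tf.constant([5.0])               # the new_value to write there
output = tf.tensor_scatter_nd_update(output, indices, updates)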

Custom conv2d operation Pytorch

I have tried a custom Conv2d function which has to work similarly to nn.Conv2d, but with the multiplication and addition used inside nn.Conv2d replaced with mymult(num1, num2) and myadd(num1, num2).
As per insight from the very helpful forum posts 1, 2, what I can do is unfold the input and then do a matrix multiplication. The @ part given in the code below can be done using loops with mymult() and myadd(), as I believe this @ is doing the matmul.
def convcheck():
    torch.manual_seed(123)
    batch_size = 2
    channels = 2
    h, w = 2, 2
    image = torch.randn(batch_size, channels, h, w)    # input image
    out_channels = 3
    kh, kw = 1, 1    # kernel size
    dh, dw = 1, 1    # stride
    size = int((h - kh + 2 * 0) / dh + 1)    # include padding in place of zero
    conv = nn.Conv2d(in_channels=channels, out_channels=out_channels, kernel_size=kw, padding=0, stride=dh, bias=False)
    out = conv(image)
    #print('out', out)
    #print('out.size()', out.size())
    #print('')
    filt = conv.weight.data
    imageunfold = F.unfold(image, kernel_size=kh, padding=0, stride=dh)
    print("Unfolded image", "\n", imageunfold, "\n", imageunfold.shape)
    kernels_flat = filt.view(out_channels, -1)
    print("Kernel Flat=", "\n", kernels_flat, "\n", kernels_flat.shape)
    res = kernels_flat @ imageunfold    # I have to replace this operation with mymult() and myadd()
    print(res, "\n", res.shape)
    #print(res.size(2),"\n",res.shape)
    res = res.view(-1, out_channels, size, size)
    #print("Same answer as builtin function", res)
    # res = kernels_flat @ imageunfold can be replaced with the loops below, although there can be
    # some other efficient implementation which I am looking to get help for.
    for m_batch in range(len(imageunfold)):
        # iterate through rows of X
        # iterate through columns of Y
        for j in range(imageunfold.size(2)):
            # iterate through rows of Y
            for k in range(imageunfold.size(1)):
                #print(result[m_batch][i][j]," +=", kernels_flat[i][k], "*", imageunfold[m_batch][k][j])
                result[m_batch][i][j] += kernels_flat[i][k] * imageunfold[m_batch][k][j]
Can someone please help me vectorize these three loops for faster execution?
The problem was with the dimensions: kernels_flat is [dim0_1, dim1_1] and imageunfold is [batch, dim0_2, dim1_2], so the result should have shape [batch, dim0_1, dim1_2].
res = kernels_flat @ imageunfold can be replaced with the loops below, although there may be some other, more efficient implementation.
for m_batch in range(len(imageunfold)):
    # iterate through rows of X
    # iterate through columns of Y
    for j in range(imageunfold.size(2)):
        # iterate through rows of Y
        for k in range(imageunfold.size(1)):
            #print(result[m_batch][i][j]," +=", kernels_flat[i][k], "*", imageunfold[m_batch][k][j])
            result[m_batch][i][j] += kernels_flat[i][k] * imageunfold[m_batch][k][j]
Your code for the matrix multiplication is missing a loop for iterating over the filters.
In the code below I fixed your implementation.
I am currently also looking for optimizations on the code. In my use case, the individual results of the multiplications (without performing addition) need to be accessible after computation. I will post here in case I find a faster solution than this.
for batch_image in range(imageunfold.shape[0]):
    for i in range(kernels_flat.shape[0]):
        for j in range(imageunfold.shape[2]):
            for k in range(kernels_flat.shape[1]):
                res_c[batch_image][i][j] += kernels_flat[i][k] * imageunfold[batch_image][k][j]
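A self-contained way to sanity-check the fixed loops against the built-in operator (shapes are illustrative; res_c is assumed to be pre-allocated with zeros):

import torch

kernels_flat = torch.randn(3, 4)       # (out_channels, kh*kw*in_channels)
imageunfold = torch.randn(2, 4, 9)     # (batch, kh*kw*in_channels, num_patches)
res_c = torch.zeros(2, 3, 9)
for batch_image in range(imageunfold.shape[0]):
    for i in range(kernels_flat.shape[0]):
        for j in range(imageunfold.shape[2]):
            for k in range(kernels_flat.shape[1]):
                res_c[batch_image][i][j] += kernels_flat[i][k] * imageunfold[batch_image][k][j]
print(torch.allclose(res_c, kernels_flat @ imageunfold, atol=1e-5))   # expected: True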

How to fix 'need at least one array to concatenate' error?

I have read through the various posts on this ValueError, but I haven't found a satisfactory solution. Please, can anyone help me figure out what I am doing wrong?
Code:
assert(type(images) == list)
# assert(type(images[0]) == np.ndarray)
# assert(len(images[0].shape) == 3)
# assert(np.max(images[0]) > 10)
# assert(np.min(images[0]) >= 0.0)
inps = []
for img in images:
    img = img.astype(np.float32)
    inps.append(np.expand_dims(img, 0))
bs = 100
with tf.Session() as sess:
    preds = []
    n_batches = int(math.ceil(float(len(inps)) / float(bs)))
    for i in range(n_batches):
        sys.stdout.write(".")
        sys.stdout.flush()
        inp = inps[(i * bs):min((i + 1) * bs, len(inps))]
        inp = np.concatenate(inp, 0)
        pred = sess.run(softmax, {'ExpandDims:0': inp})
        preds.append(pred)
    preds = np.concatenate(preds, 0)
    scores = []
    for i in range(splits):
        part = preds[(i * preds.shape[0] // splits):((i + 1) * preds.shape[0] // splits), :]
        kl = part * (np.log(part) - np.log(np.expand_dims(np.mean(part, 0), 0)))
        kl = np.mean(np.sum(kl, 1))
        scores.append(np.exp(kl))
    return np.mean(scores), np.std(scores)
Error:
File "/content/Inception-Score/inception_score.py", line 45, in get_inception_score
    preds = np.concatenate(preds, 0)
ValueError: need at least one array to concatenate
It appears that the sequence you are passing to np.concatenate is the problem. You specified the axis to concatenate on, but the list of arrays itself looks empty -- hence "need at least one array to concatenate".
np.concatenate() needs at least one array in its first argument, as detailed in the documentation here. It looks like preds contains no arrays. I am not sure what you are trying to do, but maybe concatenate is not what you want?
The problem seems to be in np.concatenate: it expects a sequence of arrays, and you are not providing that.
# syntax
numpy.concatenate((a1, a2, ...), axis=0, out=None)
Parameters:
a1, a2, … : sequence of array_like. The arrays must have the same shape, except in the dimension corresponding to axis (the first, by default).
axis : int, optional. The axis along which the arrays will be joined. If axis is None, arrays are flattened before use. Default is 0.
out : ndarray, optional. If provided, the destination to place the result. The shape must be correct, matching that of what concatenate would have returned if no out argument were specified.
Returns: ndarray. The concatenated array.
Check what preds actually contains before calling concatenate.
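For example, the same error is raised when the sequence is empty, which is worth ruling out for preds here:

import numpy as np

preds = []                    # e.g. no predictions were ever appended
np.concatenate(preds, 0)      # ValueError: need at least one array to concatenate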

OpenCV Python cv2.perspectiveTransform

I'm currently trying to do video stabilization using OpenCV and Python.
I use the following function to calculate rotation:
def accumulate_rotation(src, theta_x, theta_y, theta_z, timestamps, prev, current, f, gyro_delay=None, gyro_drift=None, shutter_duration=None):
    if prev == current:
        return src
    pts = []
    pts_transformed = []
    for x in range(10):
        current_row = []
        current_row_transformed = []
        pixel_x = x * (src.shape[1] / 10)
        for y in range(10):
            pixel_y = y * (src.shape[0] / 10)
            current_row.append([pixel_x, pixel_y])
            if shutter_duration:
                y_timestamp = current + shutter_duration * (pixel_y - src.shape[0] / 2)
            else:
                y_timestamp = current
            transform = getAccumulatedRotation(src.shape[1], src.shape[0], theta_x, theta_y, theta_z, timestamps, prev,
                                               current, f, gyro_delay, gyro_drift)
            output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform)
            current_row_transformed.append(output)
        pts.append(current_row)
        pts_transformed.append(current_row_transformed)
    o = utilities.meshwarp(src, pts_transformed)
    return o
I get the following error when it gets to output = cv2.perspectiveTransform(np.array([[pixel_x, pixel_y]], dtype="float32"), transform):
cv2.error: /Users/travis/build/skvark/opencv-python/opencv/modules/core/src/matmul.cpp:2271: error: (-215) scn + 1 == m.cols in function perspectiveTransform
Any help or suggestions would really be appreciated.
This implementation really needs to be changed in a future version, or the docs should be more clear.
From the OpenCV docs for perspectiveTransform():
src – input two-channel (...) floating-point array
Slant emphasis added by me.
>>> A = np.array([[0, 0]], dtype=np.float32)
>>> A.shape
(1, 2)
So we see from here that A is just a single-channel matrix, that is, two-dimensional. One row, two cols. You instead need a two-channel image, i.e., a three-dimensional matrix where the length of the third dimension is 2 or 3 depending on if you're sending in 2D or 3D points.
Long story short, you need to add one more set of brackets to make the set of points you're sending in three-dimensional, where the x values are in the first channel, and the y values are in the second channel.
>>> A = np.array([[[0, 0]]], dtype=np.float32)
>>> A.shape
(1, 1, 2)
Also, as suggested in the comments:
If you have an array points of shape (n_points, dimension) (i.e. dimension is 2 or 3), a nice way to re-format it for this use-case is points[np.newaxis]
It's not intuitive, and though it's documented, it's not very explicit on that point. That's all you need. I've answered an identical question before, but for the cv2.transform() function.
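A minimal sketch of that reshaping trick (the points and the identity homography here are just placeholders):

import numpy as np
import cv2

points = np.array([[10.0, 20.0], [30.0, 40.0]], dtype=np.float32)   # shape (n_points, 2)
H = np.eye(3, dtype=np.float32)                                      # placeholder 3x3 transform
out = cv2.perspectiveTransform(points[np.newaxis], H)                # input shape (1, n_points, 2)
print(out.shape)                                                     # (1, 2, 2)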
