I have a 512x512 image array and I want to perform operations on 8x8 blocks. At the moment I have something like this:
import numpy as np

output = np.zeros((512, 512))
for i in range(0, 512, 8):
    for j in range(0, 512, 8):
        a = input[i:i+8, j:j+8]
        b = some_other_array[i:i+8, j:j+8]
        output[i:i+8, j:j+8] = np.dot(a, b)
where a & b are 8x8 blocks derived from the original array. I would like to speed up this code by using vectorised operations. I have reshaped my inputs like this:
input = input.reshape(64, 8, 64, 8)
some_other_array = some_other_array.reshape(64, 8, 64, 8)
How could I perform a dot product on only axes 1 & 3 to output an array of shape (64, 8, 64, 8)?
I have tried np.tensordot(input, some_other_array, axes=([0, 1], [2, 3])), which gives the correct output shape, but the values do not match the output from the loop above. I've also looked at np.einsum, but I haven't come across a simple example of what I'm trying to achieve.
As you suspected, np.einsum can take care of this. If input and some_other_array both have shape (64, 8, 64, 8) and you write
output = np.einsum('ijkl,ilkm->ijkm', input, some_other_array)
then output will also have shape (64, 8, 64, 8), where matrix multiplication (i.e. np.dot) has been done only on axes 1 and 3.
The string argument to np.einsum looks complicated, but really it's a combination of two things. First, matrix multiplication is given by jl,lm->jm (see e.g. this answer on einsum). Second, we don't want to do anything to axes 0 and 2, so for them I just write ik,ik->ik. Combining the two gives ijkl,ilkm->ijkm.
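To double-check the result against the original loop, here's a minimal sketch (random stand-ins for input and some_other_array; note the final reshape that recovers the 512x512 layout):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((512, 512))
B = rng.standard_normal((512, 512))

out = np.einsum('ijkl,ilkm->ijkm',
                A.reshape(64, 8, 64, 8),
                B.reshape(64, 8, 64, 8)).reshape(512, 512)

# Explicit block loop for comparison.
expected = np.zeros((512, 512))
for i in range(0, 512, 8):
    for j in range(0, 512, 8):
        expected[i:i+8, j:j+8] = A[i:i+8, j:j+8] @ B[i:i+8, j:j+8]
print(np.allclose(out, expected))  # True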
They'll work if you reorder them a bit. If input and some_other_array are both shaped (64,8,64,8), then:
input = input.transpose(0,2,1,3)
some_other_array = some_other_array.transpose(0,2,1,3)
This will reorder them to (64, 64, 8, 8). At this point you can compute the block products with a single matrix multiplication. Do note that you need matmul here, not dot, which would try to multiply the entire arrays.
output = input @ some_other_array
output = output.transpose(0,2,1,3)
output = output.reshape(512,512)
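The whole pipeline, condensed (a sketch with random stand-ins for the two arrays):

import numpy as np

rng = np.random.default_rng(0)
inp = rng.standard_normal((512, 512))
oth = rng.standard_normal((512, 512))

a = inp.reshape(64, 8, 64, 8).transpose(0, 2, 1, 3)    # (64, 64, 8, 8)
b = oth.reshape(64, 8, 64, 8).transpose(0, 2, 1, 3)
out = (a @ b).transpose(0, 2, 1, 3).reshape(512, 512)  # back to image layout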
I am trying to mask an array (called dataset) in python:
The array has the following size (5032, 48, 48). Basically these are 5032 48x48 images. But some of the images may not contain any data, so there might only be 0's there. These are the ones I want to mask.
I tried the following: (dataset == 0).all(axis=2).
When I print the shape of the above operation I get (5032, 48) which is not what I want. I expected (5032, ).
I am not sure what I am doing wrong.
I want to create a mask of size (5032,) that is True if at least one value in the 48x48 image is nonzero, and False if the image contains only zeros.
Thanks for your help
Kind of a hacky way, but just sum across the last two axes and check whether the sum is nonzero:
nonzero_images = dataset[np.sum(dataset, axis=(1, 2)) != 0]
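If you want the boolean mask the question asks for, rather than the filtered images, the same sum test gives it directly (a minimal sketch; note that if the data can be negative, positive and negative values may cancel, so dataset.any(axis=(1, 2)) is the more robust check):

import numpy as np

dataset = np.concatenate([np.ones((3, 48, 48)), np.zeros((2, 48, 48))])
mask = np.sum(dataset, axis=(1, 2)) != 0  # shape (5,): True where the image has data
print(mask)  # [ True  True  True False False]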
You can try something like
# sample data - 3 nonzeros and 2 zeros
dataset = np.concatenate([np.ones((3, 48, 48)), np.zeros((2, 48, 48))])
new = dataset[dataset.any(axis=(1, 2))]  # keep images with at least one nonzero value
print(f'Dataset Shape: {dataset.shape}\nNew Shape: {new.shape}')
# Dataset Shape: (5, 48, 48)
# New Shape: (3, 48, 48)
I want to get the dot product of two arrays along the batch dimension. np.dot gave a super weird result. Suppose I have a batch of size 2. What would be the proper way to get the results?
X = np.random.randn(2,3,4)
X_t = np.transpose(X,axes=[0,2,1]) # shape now is [2,4,3]
np.matmul(X,X_t) # shape is [2,3,3]
np.dot(X,X_t) # shape is [2,3,2,3] SUPER Weird
np.einsum('ijk,ikl->ijl',X,X_t) # shape is [2,3,3], same as matmul
What is the correct way of matrix multiplication for conditions like these?
Use the @ operator (np.matmul). It broadcasts over the leading (batch) dimension and performs the matrix multiplication on the last two dimensions.
import numpy as np
x = np.random.randn(2, 3, 4)
x_t = np.transpose(x, axes=[0, 2, 1]) # shape now is [2,4,3]
wrong = np.dot(x, x_t) # shape is [2,3,2,3] SUPER Weird
res = x @ x_t
print(res.shape)
print(wrong.shape)
out:
(2, 3, 3)
(2, 3, 2, 3)
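For reference, on N-d arrays np.dot sums over the last axis of its first argument and the second-to-last axis of its second, for every pair of leading indices, which is exactly where the extra dimensions come from. Continuing the snippet above:

# wrong[i, :, k, :] equals x[i] @ x_t[k] for every pair (i, k);
# the batched result is the i == k "diagonal".
print(np.allclose(wrong[0, :, 1, :], x[0] @ x_t[1]))           # True
print(np.allclose(res, wrong[np.arange(2), :, np.arange(2)]))  # True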
I have a tensor and I want to multiply that tensor into a list of vectors. An example of minimal code is below.
import numpy as np

tensor = np.arange(4*5*6).reshape(4, 5, 6)
vectorList=[]
vectorList.append(np.array([0,1,2,3,4,5]))
vectorList.append(np.array([6,7,8,9,10,11]))
vectorList.append(np.array([12,13,14,15,16,17]))
results = []
for i in vectorList:
    results.append(tensor * i)
This produces the required results, but that for loop is just too slow. I can't seem to find a numpy/tensorflow elementwise multiply command that will allow this multiplication after stacking the vectorList.
I'm willing to trade memory for speed if that is an issue. I'm also unconcerned about the container for the results. A list of tensors or a 4-dimensional tensor are both fine here.
Edit: Updated the vectors in the list to make the question more clear.
Your 3d array (renamed from tensor):
In [448]: arr.shape
Out[448]: (4, 5, 6)
vectorList as an array is 2d:
In [449]: np.array(vectorList).shape
Out[449]: (3, 6)
And your results, as array, is 4d:
In [450]: np.array(results).shape
Out[450]: (3, 4, 5, 6)
Thanks for giving a nice diverse set of dimensions. It's easier to track them that way.
We can produce the same 4d array with broadcasting. I've included all the ':' just to highlight how dimensions are paired:
In [451]: res = np.array(vectorList)[:,None,None,:]*arr[None,:,:,:]
In [452]: res.shape
Out[452]: (3, 4, 5, 6)
In [453]: np.allclose(res, np.array(results))
Out[453]: True
res=np.array(vectorList)[:,None,None]*arr is the same thing.
A deleted answer suggested einsum; the correct expression is np.einsum('il,jkl->ijkl', vectorList, arr).
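A quick sketch checking that the broadcast and the einsum agree on the sample data:

import numpy as np

arr = np.arange(4 * 5 * 6).reshape(4, 5, 6)
vecs = np.arange(3 * 6).reshape(3, 6)  # stand-in for np.array(vectorList)

res_bcast = vecs[:, None, None, :] * arr
res_einsum = np.einsum('il,jkl->ijkl', vecs, arr)
print(np.allclose(res_bcast, res_einsum))  # True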
I have a tf.Tensor of, for example, shape (31, 6, 6, 3).
I want to perform tf.signal.fft2d on the two middle dimensions of size 6. However, the documentation says:
Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input
I could do it with a for loop, but I fear it might be very inefficient. Is there a faster way? The result must have the same shape, of course.
Thanks to this I implemented this solution using tf.transpose:
in_pad = tf.transpose(in_pad, perm=[0, 3, 1, 2])      # (31, 3, 6, 6): move channels forward
out = tf.signal.fft2d(tf.cast(in_pad, tf.complex64))  # FFT over the inner-most two dims
out = tf.transpose(out, perm=[0, 2, 3, 1])            # back to (31, 6, 6, 3)
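To verify the transpose trick against the per-channel loop the question wanted to avoid, here's a minimal sketch with dummy data (np.fft.fft2 plays the role of the reference implementation):

import numpy as np
import tensorflow as tf

x = np.random.randn(31, 6, 6, 3).astype(np.complex64)

t = tf.transpose(tf.constant(x), perm=[0, 3, 1, 2])
out = tf.transpose(tf.signal.fft2d(t), perm=[0, 2, 3, 1]).numpy()

# Reference: 2-d FFT over the middle axes, one channel at a time.
ref = np.stack([np.fft.fft2(x[..., c]) for c in range(3)], axis=-1)
print(np.allclose(out, ref, atol=1e-2))  # True, up to float32 precision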
In all of the examples it seems that addSample(input, target) is used with 1 dimensional arrays, such as:
INPUT = 5
OUTPUT = 1
input = [5, 5, 5, 5, 5]
target = [1]
ds = SequentialDataSet(5, 1)
#add data using addSample
How does one do this when the input is multi-dimensional in this way:
input = [[5, 5, 5, 5, 5], [5, 5, 5, 5, 5]]
target = [1]
How does one use addSample with such structures? I tried this:
ds = SequentialDataSet(2, 1)
ds.addSample(input, target)
and get the error message:
Could not broadcast input array from shape (2, 5) into shape 2.
Meaning the SequentialDataSet(2, 1) does not work for this structure, but SequentialDataSet((2, 5), 1) also errors. This should be easy but I cannot find the answer.
It looks like you're trying to train some sort of feed-forward network, perhaps a multi-layer perceptron: 5 inputs, one or more hidden layers, and a single output. It's not entirely clear, though, so this is a leap on my end.
Either way, your input layer should be a single array. If you have a structure or a multi-dimensional array, you'll need to flatten it and feed it in as a single set of data. So for your 2x5 structure you'd simply have 10 elements on the input, and you would be responsible for "parsing" your input structures consistently as they're fed into the network. For a 5x5 structure you'd have 25 inputs, etc.
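Here's a minimal sketch of that flattening, assuming the 2x5 input from the question (the variable names and the (10, 1) dataset size are just for illustration):

import numpy as np
from pybrain.datasets import SequentialDataSet

input_2d = np.array([[5, 5, 5, 5, 5],
                     [5, 5, 5, 5, 5]])
flat = input_2d.flatten()  # shape (10,): one element per cell of the 2x5 structure

ds = SequentialDataSet(10, 1)  # 10 flat inputs instead of a (2, 5) structure
ds.addSample(flat, [1])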
In my experience, a big part of the success/challenge with ANNs is structuring the data so that the input is normalized and represented in a way that the network can mathematically find a pattern in.
According to the post linked below, you should just input a single array:
Pybrain multi dimensional data input
For SequentialDataSet I used this example:
from itertools import cycle
from pybrain.datasets import SequentialDataSet

data = [(1,2), (1,3), (10,2), (2,0), (2,9), (4,3), (1,2), (10,5)]
ds = SequentialDataSet(2, 2)
for sample, next_sample in zip(data, cycle(data[1:])):
    ds.addSample(sample, next_sample)