In all of the examples, it seems that addSample(input, target) is used with one-dimensional arrays, such as:
INPUT = 5
OUTPUT = 1
input = [5, 5, 5, 5, 5]
target = [1]
ds = SequentialDataSet(5, 1)
#add data using addSample
How does one do this when the input is multi-dimensional, like this:
input = [[5, 5, 5, 5, 5], [5, 5, 5, 5, 5]]
target = [1]
How does one use addSample with such structures? I tried this:
ds = SequentialDataSet(2, 1)
ds.addSample(input, target)
and get the error message:
Could not broadcast input array from shape (2, 5) into shape 2.
This means SequentialDataSet(2, 1) does not work for this structure, but SequentialDataSet((2, 5), 1) also errors. This should be easy, but I cannot find the answer.
It looks like you're trying to train some sort of feed-forward network, perhaps a multi-layer perceptron: 5 input units, one or more hidden layers, and a single output unit. It's not entirely clear, so this is a leap on my end.
Either way, your input layer should be a single flat array. If you have a structure or a multi-dimensional array, you'll need to collapse it and feed it in as a single set of values. So for your 2x5 example you'd simply have 10 input elements, and you would be responsible for flattening your input structures consistently as they're fed into the network. For a 5x5 structure you'd have 25 inputs, and so on.
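For example, here is a minimal sketch of that flattening, assuming PyBrain's SequentialDataSet from the question and numpy for the reshaping (the 2x5 array and the target value are just placeholders):

import numpy as np
from pybrain.datasets import SequentialDataSet

# Hypothetical 2x5 input collapsed into a single flat vector of 10 values.
sample_2d = np.array([[5, 5, 5, 5, 5],
                      [5, 5, 5, 5, 5]])
target = [1]

ds = SequentialDataSet(10, 1)               # 2 * 5 = 10 input units, 1 target unit
ds.addSample(sample_2d.flatten(), target)   # flatten row by row, consistently for every sample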
In my experience, a big part of the success (and challenge) with ANNs is structuring the data so that the input is normalized and represented in a way that lets the network mathematically find a pattern.
According to the post linked below, you should just input one array:
Pybrain multi dimensional data input
For SequentialDataSet I used this example:
from itertools import cycle
from pybrain.datasets import SequentialDataSet

data = [(1, 2), (1, 3), (10, 2), (2, 0), (2, 9), (4, 3), (1, 2), (10, 5)]
ds = SequentialDataSet(2, 2)
for sample, next_sample in zip(data, cycle(data[1:])):
    ds.addSample(sample, next_sample)
Related
I have packed data and each sequence's length.
Example:
data = torch.tensor([4, 1, 3, 5, 2, 6])
lengths = torch.tensor([2,1,3])
I want to create a padded 2-D (batch_size, max_length) matrix like:
output = torch.tensor([[4, 1, 0],   # length = 2
                       [3, 0, 0],   # length = 1
                       [5, 2, 6]])  # length = 3
And because of my training setup, this operation needs to be differentiable, i.e. gradients must be able to flow back through it.
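A minimal sketch of one way to do this, assuming the flat tensor holds float values so gradients can flow (both torch.split and torch.nn.utils.rnn.pad_sequence are differentiable):

import torch
from torch.nn.utils.rnn import pad_sequence

# Float data with requires_grad so the padding op can be back-propagated through.
data = torch.tensor([4., 1., 3., 5., 2., 6.], requires_grad=True)
lengths = torch.tensor([2, 1, 3])

# Split the flat tensor into one chunk per sequence, then pad to the max length.
sequences = torch.split(data, lengths.tolist())
output = pad_sequence(sequences, batch_first=True)   # shape: (batch_size, max_length)

output.sum().backward()   # gradients flow back to `data`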
I have a tf.Tensor of, for example, shape (31, 6, 6, 3).
I want to perform tf.signal.fft2d over the two dimensions of size 6, i.e. the middle ones. However, the documentation says:
Computes the 2-dimensional discrete Fourier transform over the inner-most 2 dimensions of input
I could do it with a for loop, but I fear that would be very inefficient. Is there a faster way?
The result must have the same shape as the input, of course.
Thanks to this, I implemented the following solution using tf.transpose:
# Move the channel axis next to the batch axis so the two 6x6 dimensions
# become the inner-most ones: (31, 6, 6, 3) -> (31, 3, 6, 6)
in_pad = tf.transpose(in_pad, perm=[0, 3, 1, 2])
out = tf.signal.fft2d(tf.cast(in_pad, tf.complex64))
# Move the channel axis back to restore the original layout: (31, 3, 6, 6) -> (31, 6, 6, 3)
out = tf.transpose(out, perm=[0, 2, 3, 1])
Is there a handy way to divide a list of tensors by a list of scalars? I'm trying to do something similar to the following, but get the indicated error on the last line:
import tensorflow as tf
tf.__version__
# '1.13.1'
import numpy as np
ds1 = tf.data.Dataset.from_tensor_slices(np.random.random([10, 3, 4])).batch(5, drop_remainder=True)
i = ds1.make_one_shot_iterator()
n = i.get_next()
n.shape
# TensorShape([Dimension(5), Dimension(3), Dimension(4)])
var = tf.Variable([1,2,3,4,5], dtype=np.float64)
op = n/var
# Traceback ....
# ValueError: Dimensions must be equal, but are 4 and 5 for 'truediv' (op: 'RealDiv') with input shapes: [5,3,4], [5].
My desired result is a tensor of shape [5, 3, 4] where the entries in the first [3, 4] slice are divided by 1, in the second by 2, in the third by 3, and so on. (The values 1-5 are standing in for computed values in my actual code.)
I'm pretty sure that the answer is going to be something easy, but I can't find the right set of search keywords to get SO or Google to cooperate.
As suggested by a commenter, reshaping var works:
op = n/tf.reshape(var, (5, 1, 1))
This yields the expected result. Essentially, I was originally asking too much of TensorFlow's broadcasting and shape inference.
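For reference, a slightly more general form of the same fix (a sketch only; the -1 just keeps the reshape independent of the batch size):

# Reshape the per-batch scalars to (batch, 1, 1) so they broadcast
# across the trailing (3, 4) dimensions of n.
op = n / tf.reshape(var, (-1, 1, 1))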
I am trying to create a custom layer that is similar to Max Pooling or the first step of a separable convolution.
For example, with a 2-D tensor from which I want to extract the non-overlapping 2x2 patches:
if I have the [4,4] tensor
[[ 0,  1,  2,  3],
 [ 4,  5,  6,  7],
 [ 8,  9, 10, 11],
 [12, 13, 14, 15]]
I want to end up with the following [2,2,4] Tensor
[[[ 0,  1,  4,  5], [ 2,  3,  6,  7]],
 [[ 8,  9, 12, 13], [10, 11, 14, 15]]]
For a 3-Tensor, I want something similar but to also separate out the 3rd dimension. tf.extract_image_patches almost does what I want, but it folds the "depth" dimension into each patch.
Ideally, if I had a tensor of shape [32, 64, 7] and wanted to extract all the [2, 2] patches out of it, I would end up with a shape of [16, 32, 7, 4].
To be clear, I just want to extract the patches, not to actually do max pooling or separable convolution.
Since I am not actually augmenting the data, I suspect that you can do it with some tf.reshape trickery... Is there any nice way to achieve this in tensorflow without resorting to slicing+stitching/for loops?
Also, what is the correct terminology for this operation? Windowing? Tiling?
Turns out this is straightforward with tf.extract_image_patches plus a reshape and tf.transpose. The solution that ended up working for me is:
# Assume x is in BHWC form
def pool(x, size=2):
    channels = x.get_shape().as_list()[-1]
    # Extract non-overlapping size x size patches; each patch is flattened
    # into the last dimension as size*size*channels values.
    x = tf.extract_image_patches(
        x,
        ksizes=[1, size, size, 1],
        strides=[1, size, size, 1],
        rates=[1, 1, 1, 1],
        padding="SAME"
    )
    # Un-fold the depth: [B, H/size, W/size, size*size*channels]
    # -> [B, H/size, W/size, size**2, channels]
    x = tf.reshape(x, [-1] + x.get_shape().as_list()[1:3] + [size**2, channels])
    # Put channels before the patch elements: [B, H/size, W/size, channels, size**2]
    x = tf.transpose(x, [0, 1, 2, 4, 3])
    return x
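A quick usage sketch under the same assumptions as the answer (TF 1.x, where tf.extract_image_patches is a top-level function; the shapes mirror the [32, 64, 7] example from the question with a batch dimension added):

import tensorflow as tf

x = tf.random_normal([1, 32, 64, 7])   # batch of 1, height 32, width 64, 7 channels
patches = pool(x, size=2)
print(patches.get_shape())             # expected: (1, 16, 32, 7, 4)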
I have a simple, one-dimensional Python list of random numbers. What I want to do is convert it into a NumPy matrix of a specific shape. My current attempt looks like this:
import random
import numpy as np

randomWeights = []
for i in range(80):
    randomWeights.append(random.uniform(-1, 1))
W = np.mat(randomWeights)
W.reshape(8, 10)
Unfortunately it always creates a matrix of the form:
[[random1, random2, random3, ...]]
So everything ends up in a single row, and the reshape command appears to have no effect. Is there a way to convert the 1D array to a matrix so that the first x items become row 1 of the matrix, the next x items become row 2, and so on?
Basically this would be the intended shape:
[[1, 2, 3, 4, 5, 6, 7, 8],
[9, 10, 11, ... , 16],
[..., 800]]
I suppose I can always build a new matrix in the desired form manually by parsing through the input array. But I'd like to know if there is a simpler, more elegant solution with built-in functions I'm not seeing. If I have to build those matrices manually, I'll have a ton of extra work in other areas of the code, since all my source data comes in simple 1D arrays but will be computed as matrices.
reshape() doesn't reshape in place; you need to assign the result:
>>> W = W.reshape(8, 10)
>>> W.shape
(8, 10)
Alternatively, you can use W.resize((8, 10)) (see ndarray.resize()), which modifies the array in place, provided nothing else references it.
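For completeness, a short sketch of the original example done with plain NumPy (np.random.uniform generates the flat array directly; an existing 1D list works the same way via np.array(...).reshape(...)):

import numpy as np

random_weights = np.random.uniform(-1, 1, 80)   # flat array of 80 values
W = random_weights.reshape(8, 10)               # first 10 values become row 1, and so on
print(W.shape)   # (8, 10)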