Recently I have been using Theano to create a graph for identifying flowers. However, the output of Theano's built-in functions doesn't seem to be the type I expect. For example:
a = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
sum = theano.tensor.sum(a, axis = 1)
sum_array = numpy.asarray(sum, dtype = numpy.float32)
I don't know why this doesn't work; I simply want to create an array to store the sum result.
This is just a simple example. In my project, I use the function "conv2d" to produce an output after convolving the images, but I can't get information about that output, such as its shape:
conv_out = conv2d(input, filter_shape, image_shape, ...)
output = theano.tensor.tanh(conv_out + bias.dimshuffle('x', 0, 'x', 'x'))
How can I change the 'output' into a 4D matrix and conveniently get its shape and other information?
Theano is different from regular Python in that what you are creating is a symbolic graph.
You need to call theano.function() to compile that graph into a callable function, and then call the resulting function with concrete values.
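For the sum example above, a minimal sketch might look like this (variable names are illustrative):
import numpy
import theano
import theano.tensor as T

# Symbolic input: a float32 matrix; nothing is computed yet
x = T.matrix('x', dtype='float32')
row_sums = T.sum(x, axis=1)

# Compile the symbolic graph into a callable function
f = theano.function(inputs=[x], outputs=row_sums)

# Only now do we get a concrete numpy array back
a = numpy.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=numpy.float32)
sum_array = f(a)  # array([ 6., 15., 24.], dtype=float32)
The same applies to the conv2d output: it is a symbolic variable, so its concrete shape is only available after compiling and running the graph.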
I have a tensor a = torch.arange(6).reshape(2, 3) and another tensor b = (torch.rand(a.size()) > 0.5).int().nonzero().
I want to create a new tensor that contains only values from a of the indices that are indicated by b.
For example:
a = torch.arange(6).reshape(2,3) # tensor([[0, 1, 2],
# [3, 4, 5]])
b = (torch.rand(a.size()) > 0.5).int().nonzero() # tensor([[0, 1],
# [0, 2],
# [1, 0],
# [1, 1]])
The desired output is:
tensor([1,2,3,4])
I know that I can iterate over the values of b and access those values in a as indices, but I wanted to know if there is a better PyTorch way to do this (using tensor operations only).
Note: the shape of the output tensor doesn't really matter; I just need a tensor with only the values indicated by b.
If I understand you correctly, you can do:
a[b[:,0], b[:,1]]
This will produce a 1D tensor with the values at the indices specified by b. Note that the output may differ from run to run, since b is built from torch.rand, so the selected indices change each time.
If you don't know the number of dimensions in advance, you'll need to use map() to generate the desired slices:
a[tuple(map(lambda x: b[:,x], range(a.dim())))]
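Putting it together with the tensors from the question (here b is written out explicitly instead of being drawn from torch.rand, so the result is reproducible):
import torch

a = torch.arange(6).reshape(2, 3)                    # tensor([[0, 1, 2],
                                                     #         [3, 4, 5]])
b = torch.tensor([[0, 1], [0, 2], [1, 0], [1, 1]])   # the indices from the example

# Index rows with the first column of b and columns with the second
print(a[b[:, 0], b[:, 1]])                           # tensor([1, 2, 3, 4])

# Dimension-agnostic variant
print(a[tuple(b[:, d] for d in range(a.dim()))])     # tensor([1, 2, 3, 4])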
I have a tensor, for example called tensor1, of shape (1, 20, 4). I am trying to create a tensor using certain indices (1, 4, 5) from this tensor. In numpy I could do this with, for example, tensor[:, [1, 4, 5], :]. From what I understand this could be done with "tf.gather_nd", but I don't really see how.
What you want can be done with tf.gather:
tensor2 = tf.gather(tensor1, [1, 4, 5], axis=1)
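A quick shape check (a sketch, using a constant tensor in place of whatever tensor1 actually is):
import numpy as np
import tensorflow as tf

tensor1 = tf.constant(np.random.rand(1, 20, 4), dtype=tf.float32)

# Pick indices 1, 4 and 5 along the second axis
tensor2 = tf.gather(tensor1, [1, 4, 5], axis=1)
print(tensor2.shape)  # (1, 3, 4)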
Is there any way to do some computation on a tensor inside the graph?
For example, my graph:
slim = tf.contrib.slim

def slim_graph(images, train=False):
    with slim.arg_scope([slim.conv2d, slim.fully_connected],
                        activation_fn=tf.nn.relu,
                        weights_initializer=tf.truncated_normal_initializer(0.0, 0.01),
                        weights_regularizer=slim.l2_regularizer(0.0005)):
        net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1')
        # Do my computation on net with numpy
        np_array_result = my_func(net)  # it returns a numpy array
        # Use the numpy array as input to the rest of the graph
        net = slim.max_pool2d(np_array_result, [2, 2], scope='pool1')
        ...
        return logits
Can we do something like that? How do I get the feature maps out of the graph so I can compute on them?
I could split the graph into two parts, call Session.run([part1]), pass the result through my function, and then feed it into Session.run([part2]), but that seems awkward.
You can use the tf.py_func wrapper for Python functions.
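A minimal sketch of how that could be wired into the graph above (my_func is a stand-in numpy function; note that py_func discards static shape information, so it is restored by hand):
import numpy as np
import tensorflow as tf

slim = tf.contrib.slim

def my_func(net):
    # Arbitrary numpy computation on the concrete feature maps
    return (net - net.mean()).astype(np.float32)

def slim_graph(images, train=False):
    net = slim.repeat(images, 2, slim.conv2d, 64, [3, 3], scope='conv1')

    # Wrap the numpy function; it receives real arrays at run time
    np_result = tf.py_func(my_func, [net], tf.float32)

    # py_func loses the static shape, so set it explicitly
    np_result.set_shape(net.get_shape())

    net = slim.max_pool2d(np_result, [2, 2], scope='pool1')
    return net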
In all of the examples, it seems that addSample(input, target) is used with one-dimensional arrays, such as:
INPUT = 5
OUTPUT = 1
input = [5, 5, 5, 5, 5]
target = [1]
ds = SequentialDataSet(5, 1)
#add data using addSample
How does one do this when the input is multi-dimensional in this way:
input = [[5, 5, 5, 5, 5], [5, 5, 5, 5, 5]]
target = [1]
How does one use addSample with such structures? I tried this:
ds = SequentialDataSet(2, 1)
ds.addSample(input, target)
and get the error message:
Could not broadcast input array from shape (2, 5) into shape 2.
Meaning the SequentialDataSet(2, 1) does not work for this structure, but SequentialDataSet((2, 5), 1) also errors. This should be easy but I cannot find the answer.
It looks like you're trying to train some sort of feed-forward network, perhaps a multi-layer perceptron: five inputs, one or more hidden layers, and a single output. It's not entirely clear, so this is a leap on my end.
Either way, your input layer should be a single array. If you have a structure or multi-dimensional array, you'll need to flatten it and feed it in as a single set of values. So for your 2x5 example you'd simply have 10 elements on the input, and you'd be responsible for "flattening" your input structures consistently as they're fed into the network. For a 5x5 structure you'd have 25 inputs, and so on.
In my experience, a big part of the success (and challenge) with ANNs is structuring the data so that the input is normalized and represented in a way the network can mathematically find a pattern in.
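A sketch of the flattening idea, using the 2x5 input from the question (the dataset dimensions follow from the flattened length):
import numpy as np
from pybrain.datasets import SequentialDataSet

input_2d = [[5, 5, 5, 5, 5], [5, 5, 5, 5, 5]]
target = [1]

# Collapse the 2x5 structure into a single 10-element vector
flat_input = np.asarray(input_2d).flatten()   # shape (10,)

ds = SequentialDataSet(10, 1)
ds.addSample(flat_input, target)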
According to the post linked below, you should just input a single array:
Pybrain multi dimensional data input
For SequentialDataSet I used this example:
from itertools import cycle

data = [(1, 2), (1, 3), (10, 2), (2, 0), (2, 9), (4, 3), (1, 2), (10, 5)]
ds = SequentialDataSet(2, 2)
for sample, next_sample in zip(data, cycle(data[1:])):
    ds.addSample(sample, next_sample)
I have a simple, one-dimensional Python array with random numbers. What I want to do is convert it into a numpy matrix of a specific shape. My current attempt looks like this:
import random
import numpy as np

randomWeights = []
for i in range(80):
    randomWeights.append(random.uniform(-1, 1))
W = np.mat(randomWeights)
W.reshape(8, 10)
Unfortunately it always creates a matrix of the form:
[[random1, random2, random3, ...]]
So the first dimension only has a single entry (everything stays in one row) and the reshape command has no effect. Is there a way to convert the 1D array into a matrix so that the first x items become row 1 of the matrix, the next x items become row 2, and so on?
Basically this would be the intended shape:
[[1, 2, 3, 4, 5, 6, 7, 8],
[9, 10, 11, ... , 16],
[..., 800]]
I suppose I could always build the matrix in the desired form manually by iterating over the input array. But I'd like to know if there is a simpler, more elegant solution with built-in functions I'm not seeing. If I have to build those matrices manually, I'll have a ton of extra work in other areas of the code, since all my source data comes in simple 1D arrays but will be computed on as matrices.
reshape() doesn't reshape in place; you need to assign the result:
>>> W = W.reshape(8,10)
>>> W.shape
(8, 10)
Alternatively, you can use W.resize((8, 10)) (ndarray.resize()), which changes the shape in place.
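If the goal is just an 8x10 matrix of random weights, the whole thing can also be done in one step (a sketch using numpy's own random generator instead of the random module):
import numpy as np

# Draw 80 uniform values in [-1, 1) and lay them out as 8 rows of 10
W = np.random.uniform(-1, 1, size=80).reshape(8, 10)
print(W.shape)  # (8, 10)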