Input a Tensor into a TensorFlow graph - Python

I am following the tutorial on simple audio recognition and am currently editing label_wav.py. In the original case we input a wave file and the graph predicts the label (internally it computes the spectrogram and MFCCs). Now I am looking to input the MFCCs directly rather than the wave file, and run the graph by feeding the MFCC tensor.
# mfccs: Tensor("strided_slice:0", shape=(1, 98, 40), dtype=float32)
mfcc_input_layer_name = 'Reshape:0'
with tf.Session() as sess:
    predictions, = sess.run(softmax_tensor, {mfcc_input_layer_name: mfcc})
After a bit of googling, I found some discussion on GitHub and created a session handle.
# mfccs: Tensor("strided_slice:0", shape=(1, 98, 40), dtype=float32)
mfcc_input_layer_name = 'Reshape:0'
with tf.Session() as sess:
    h = tf.get_session_handle(mfccs)
    h = sess.run(h)
    predictions, = sess.run(softmax_tensor, {mfcc_input_layer_name: h})
The code works as expected, but I am wondering whether there is a better way of dealing with the tensor than creating a handle and then passing it?

I suppose you want to replace an intermediate Tensor with a value via feed_dict. If you have the Tensor object, you can replace it with feed_dict as follows:
a = tf.constant(3, name="a")
b = tf.constant(4, name="b")
c = tf.add(a, b, name="c")
d = c * 3
with tf.Session() as sess:
    print(sess.run(d))
    print(sess.run(d, feed_dict={c: 2}))
Even if you don't have the Tensor object, you can get it with get_tensor_by_name:
a = tf.constant(3, name="a")
b = tf.constant(4, name="b")
c = tf.add(a, b, name="c")
d = c * 3
with tf.Session() as sess:
    c_tensor = tf.get_default_graph().get_tensor_by_name("c:0")
    print(sess.run(d, feed_dict={c_tensor: 2}))
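For the original question, the same approach avoids the session handle: evaluate the MFCC tensor once and feed the resulting NumPy array into 'Reshape:0'. A minimal sketch, assuming mfccs and softmax_tensor are the tensors from the question's snippets:
# Sketch only: `mfccs`, `softmax_tensor` and the node name 'Reshape:0' are taken
# from the question; evaluate the MFCCs once, then feed the NumPy result directly.
with tf.Session() as sess:
    mfcc_value = sess.run(mfccs)  # NumPy array of shape (1, 98, 40)
    reshape_tensor = sess.graph.get_tensor_by_name('Reshape:0')
    predictions, = sess.run(softmax_tensor, feed_dict={reshape_tensor: mfcc_value})
Since feed_dict also accepts tensor names as keys, feeding {'Reshape:0': mfcc_value} works as well.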

Related

The Way to Connect Multiple Neural Networks in a Series (Not Parallel)

I wonder whether there is any way to connect multiple NNs in series in TensorFlow.
For example, feed the input features to a DNN structure, and use the resulting values as the input of an RNN structure.
Example code:
import tensorflow as tf
import numpy as np

a = 50  # batch_size
b = 60  # sequence in RNN
c = 40  # features
d = 6   # label classes
rnn_size = b

x_data = np.random.rand(a, b, c)
y_data = np.random.randint(0, high=d, size=[a, 1])

tf.reset_default_graph()
X = tf.placeholder(tf.float32, shape=[None, b, c])
Y = tf.placeholder(tf.float32, shape=[None, d])
X = tf.transpose(X, (1, 0, 2))
X = tf.reshape(X, (-1, c))
X = tf.split(X, b)

hidden_units = [40, 20, 10]

# DNN Structure
dnn = []
for i in range(len(hidden_units)):
    if i == 0:
        T = X
    else:
        T = dnn[-1]
    dnn.append(tf.layers.dense(T, hidden_units[i], activation=tf.nn.relu,
                               kernel_initializer=tf.contrib.layers.xavier_initializer()))

# RNN Structure
rnn = {'w': tf.Variable(tf.random_normal([rnn_size, d], stddev=0.01), dtype=tf.float32),
       'b': tf.Variable(tf.random_normal([d], stddev=0.01), dtype=tf.float32)}
cell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size)
outputs, states = tf.contrib.rnn.static_rnn(cell, dnn[-1], dtype=tf.float32)
output = tf.matmul(outputs[-1], rnn['w']) + rnn['b']

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y, logits=output))
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
correct = tf.equal(tf.argmax(output, 1), tf.argmax(cost, 1))
acc = tf.reduce_mean(tf.cast(correct, tf.float32))

# Run Session
sess = tf.Session()
sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
_, c = sess.run([optimizer, cost], feed_dict={X: x_data, Y: tf.Session().run(tf.one_hot(y_data), d)})
print('Accuracy: ', sess.run(acc, feed_dict={X: x_data, Y: tf.Session().run(tf.one_hot(y_data), d)}))
When I run this code, the following error is raised:
File "C:\Anaconda3\Lib\site-packages\tensorflow\python\layers\core.py", line 250, in dense
dtype=inputs.dtype.base_dtype,
AttributeError: 'list' object has no attribute 'dtype'
It seems to be related to the type of dnn[-1].
Is there a connective function or a data-type converter for connecting the neural networks?
I've finally solved the problem.
The cause of the error was a bit obscure: X becomes a Python list after running tf.split, while tf.layers.dense expects a single tensor.
I generated a list of DNN structures, one per element of the sequence, as follows:
seq = []
for i in range(b):
    ### dnn structure for the i-th array of the split ###
    seq.append(dnn_structure)
After tuning some of the code, the whole thing worked well.
Thanks for your attention :)
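For reference, a minimal sketch of that fix (my own illustration, TF 1.x style): apply the dense stack to each of the b tensors returned by tf.split, then pass the resulting list to static_rnn. Note that each tf.layers.dense call below creates its own variables, so weights are not shared across time steps in this sketch.
# Sketch only: X is the list of b tensors of shape (batch, c) returned by tf.split.
seq = []
for t in range(b):
    h = X[t]
    for units in hidden_units:                     # e.g. [40, 20, 10]
        h = tf.layers.dense(h, units, activation=tf.nn.relu)
    seq.append(h)                                  # one (batch, 10) tensor per time step
outputs, states = tf.contrib.rnn.static_rnn(cell, seq, dtype=tf.float32)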

In TensorFlow, what is the difference between the returned 'output' and the 'h' of the state tuple (c, h) in LSTMCell?

I've searched across many tutorials/blogs/guides and the official TensorFlow documentation to understand this. For example, see the lines below:
lstm = tf.nn.rnn_cell.LSTMCell(512)
output, state_tuple = lstm(current_input, last_state_tuple)
Now if I unpack the state,
last_cell_memory, last_hidden_state = state_tuple
both output and last_hidden_state have exactly the same dimensions, [batch_size, 512]. Can they be used interchangeably? I mean, can I do this:
last_state_tuple = last_cell_memory, output
and then feed last_state_tuple into lstm?
Jacques's answer is correct, but it doesn't mention an important point: the state of an LSTM layer almost always equals the output. The difference becomes important when the chain of LSTM cells is long and not all input sequences have the same length (and hence are padded). That's when you should distinguish between the state and the output.
See the runnable example in my answer on a similar question (it uses BasicRNNCell, but you'll get the same result with LSTMCell).
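To make the padding point concrete without following the link, here is a minimal sketch of my own (assuming TF 1.x and tf.nn.dynamic_rnn with a sequence_length argument): past the true sequence length the per-step output is zero, while the returned state keeps the last valid hidden state.
import numpy as np
import tensorflow as tf

tf.reset_default_graph()
cell = tf.nn.rnn_cell.LSTMCell(4)
x = tf.placeholder(tf.float32, shape=(1, 5, 3))   # 1 sequence, 5 steps, 3 features
seq_len = tf.placeholder(tf.int32, shape=(1,))    # true (unpadded) length
outputs, state = tf.nn.dynamic_rnn(cell, x, sequence_length=seq_len, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    data = np.random.randn(1, 5, 3).astype(np.float32)
    data[0, 3:] = 0.0                              # last two steps are padding
    out, st = sess.run([outputs, state], {x: data, seq_len: [3]})
    print(out[0, -1])  # zeros: no output is emitted for padded steps
    print(st.h[0])     # equals out[0, 2], the last valid output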
Yes, the second element of the state is the same as the output.
From https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMStateTuple
Stores two elements: (c, h), in that order. Where c is the hidden state and h is the output.
Also to verify experimentally:
import tensorflow as tf
from numpy import random as rng
lstm = tf.nn.rnn_cell.LSTMCell(10)
inp = tf.placeholder(tf.float32, shape=(1, 10))
stt = tf.placeholder(tf.float32, shape=(1, 10))
hdd = tf.placeholder(tf.float32, shape=(1, 10))
out = lstm(inp, (stt, hdd))
sess = tf.InteractiveSession()
init = tf.global_variables_initializer()
sess.run(init)
a = rng.randn(1, 10)
b = rng.randn(1, 10)
c = rng.randn(1, 10)
output = sess.run(out, {inp: a, stt: b, hdd: c})
assert (output[0] == output[1][1]).all()  # the cell output equals h, the second element of the state tuple

Implementation of a neural model on TensorFlow

I am trying to implement a neural network model on TensorFlow but seem to be having problems with the shapes of the placeholders. I'm new to TF, so it could just be a simple misunderstanding. Here's my code and a data sample:
_data=[[0.4,0.5,0.6,1],[0.7,0.8,0.9,0],....]
The data comprises arrays of 4 columns; the last column of each array is the label. I want to classify each array as label 0, label 1, or label 2.
import tensorflow as tf
import numpy as np

_data = datamatrix
X = tf.placeholder(tf.float32, [None, 3])
W = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([3]))
init = tf.global_variables_initializer()
Y = tf.nn.softmax(tf.matmul(X, W) + b)

# placeholder for correct labels
Y_ = tf.placeholder(tf.float32, [None, 1])

# loss function
import time
start = time.time()
cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))

# % of correct answers found in batch
is_correct = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

optimizer = tf.train.GradientDescentOptimizer(0.003)
train_step = optimizer.minimize(cross_entropy)

sess = tf.Session()
sess.run(init)

for i in range(1000):
    # load batch of images and correct answers
    batch_X, batch_Y = [x[:3] for x in _data[:2000]], [x[-1] for x in _data[:2000]]
    train_data = {X: batch_X, Y_: batch_Y}
    # train
    sess.run(train_step, feed_dict=train_data)
    # success?
    a, c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I got the following error message after running my code:
ValueError: Cannot feed value of shape (2000,) for Tensor 'Placeholder_1:0', which has shape '(?, 1)'
My desired output is the performance of the model using cross-entropy: the accuracy value from the code line below:
a,c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I would also appreciate any suggestions on how to improve the model, or a model that is more suitable for my data.
The shapes of Placeholder_1:0 (that is, Y_) and of the input data batch_Y are mismatched, as the error message says; notice the 1-D vs. 2-D array.
So you should either define a 1-D placeholder:
Y_ = tf.placeholder(tf.float32, [None])
or prepare 2-D data:
batch_X, batch_Y = [x[:3] for x in _data[:2000]],[x[-1:] for x in _data[:2000]]
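As for improving the model itself: with three classes, a common pattern is one-hot labels of width 3 and matching weight shapes. A minimal sketch (it assumes the _data layout from the question; the names and hyperparameters are illustrative, not the original code):
import numpy as np
import tensorflow as tf

X = tf.placeholder(tf.float32, [None, 3])
Y_ = tf.placeholder(tf.float32, [None, 3])              # one-hot labels for 3 classes
W = tf.Variable(tf.zeros([3, 3]))                        # 3 features -> 3 classes
b = tf.Variable(tf.zeros([3]))
logits = tf.matmul(X, W) + b
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=Y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.003).minimize(cross_entropy)

batch_X = np.array([x[:3] for x in _data[:2000]], dtype=np.float32)
labels = np.array([int(x[-1]) for x in _data[:2000]])
batch_Y = np.eye(3, dtype=np.float32)[labels]            # one-hot encode labels 0/1/2

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_step, feed_dict={X: batch_X, Y_: batch_Y})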

Shapes in TensorFlow

I am new to TensorFlow and I have problems combining shapes (n,) with shapes (n,1).
I have this code:
if __name__ == '__main__':
    trainSetX, trainSetY = utils.load_train_set()

    # create placeholders & variables
    X = tf.placeholder(tf.float32, shape=(num_of_features,))
    y = tf.placeholder(tf.float32, shape=(1,))
    W, b = initialize_params()

    # predict y
    y_estim = linear_function(X, W, b)
    y_pred = tf.sigmoid(y_estim)

    # set the optimizer
    loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_pred)
    loss_mean = tf.reduce_mean(loss)
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=alpha).minimize(loss_mean)

    # training phase
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        for idx in range(num_of_examples):
            cur_x, cur_y = trainSetX[idx], trainSetY[idx]
            _, c = sess.run([optimizer, loss_mean], feed_dict={X: cur_x, y: cur_y})
I am trying to implement stochastic gradient descent by feeding one example at a time. The problem is that the data seem to be fed with shape (num_of_features,), while I need (num_of_features, 1) for the other functions to work correctly.
For example, the code above causes an error when it calculates the prediction of y with this function:
def linear_function(x, w, b):
    y_est = tf.add(tf.matmul(w, x), b)
    return y_est
The error is:
ValueError: Shape must be rank 2 but is rank 1 for 'MatMul' (op: 'MatMul') with input shapes: [1,3197], [3197].
I was trying to use tf.reshape with X and y to somehow solve this problem, but it caused errors in other places.
Is it possible to feed the data in feed_dict={X: cur_x, y: cur_y} in the "correct" shape?
Or what is the way to properly implement this?
Thanks.
For matrix multiplication you need to follow the shape rule
(a, b) * (b, c) = (a, c)
which means you do need to reshape, since the shapes in your code do not follow it. Showing the error you got after reshaping would help.
Hope this gives you a hint:
import tensorflow as tf
a = tf.constant([1, 2], shape=[1, 2])
b = tf.constant([7, 8], shape=[2])
print(a.shape) # => (1, 2)
print(b.shape) # => (2,)
sess = tf.Session()
# r = tf.matmul(a, b)
# print(sess.run(r)) # this gives you error
c = tf.reshape(b, [2, 1])
print(c.shape) # => (2, 1)
r = tf.matmul(a, c)
foo = tf.reshape(r, [1])
foo = sess.run(foo)
print(foo) # this gives you [23]
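Applied to the question, one option (a sketch under the assumption that each trainSetX row is a 1-D array of length num_of_features) is to declare the placeholders as explicit rank-2 column vectors and reshape each example before feeding:
import numpy as np
import tensorflow as tf

num_of_features = 3197
X = tf.placeholder(tf.float32, shape=(num_of_features, 1))   # column vector
y = tf.placeholder(tf.float32, shape=(1, 1))
W = tf.Variable(tf.zeros([1, num_of_features]))              # illustrative stand-in for initialize_params()
b = tf.Variable(tf.zeros([1, 1]))
y_estim = tf.matmul(W, X) + b                # (1, 3197) x (3197, 1) -> (1, 1)
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_estim)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    cur_x = np.random.rand(num_of_features).reshape(num_of_features, 1)  # was shape (3197,)
    cur_y = np.array([[1.0]])                                            # was a scalar label
    print(sess.run(loss, feed_dict={X: cur_x, y: cur_y}))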

TensorFlow cannot feed value of shape (1,) for Tensor 'x:0', which has shape '(?, 128)'

I just browsed through Stack Overflow and other forums but couldn't find anything helpful for my problem, though it seems related to this question.
I currently have a trained TensorFlow model (128 inputs and 11 outputs) which I saved following the MNIST tutorial by TensorFlow.
It seemed to be successful, and I now have a model in this folder with the 3 files (.meta, .ckpt.data and .index). However, I want to restore it and use it for predictions:
# encoding[0] => numpy ndarray (128,), anyway a list with only one entry
# unknowndata = np.array(encoding[0])[None]
unknowndata = np.expand_dims(encoding[0], axis=0)
print(unknowndata.shape)  # Output (1, 128)

# Restore pre-trained tf model
with tf.Session() as sess:
    # saver.restore(sess, "models/model_1.ckpt")
    saver = tf.train.import_meta_graph('models/model_1.ckpt.meta')
    saver.restore(sess, tf.train.latest_checkpoint('models/./'))

    y = tf.get_collection('final tensor')  # tf.nn.softmax(tf.matmul(y2, W3) + b3)
    X = tf.get_collection('input')         # tf.placeholder(tf.float32, [None, 128])

    # W1 = tf.get_collection('vars')[0]
    # b1 = tf.get_collection('vars')[1]
    # W2 = tf.get_collection('vars')[2]
    # b2 = tf.get_collection('vars')[3]
    # W3 = tf.get_collection('vars')[4]
    # b3 = tf.get_collection('vars')[5]
    # y1 = tf.nn.relu(tf.matmul(X, W1) + b1)
    # y2 = tf.nn.relu(tf.matmul(y1, W2) + b2)
    # yLog = tf.matmul(y2, W3) + b3
    # y = tf.nn.softmax(yLog)

    prediction = tf.argmax(y, 1)
    print(sess.run(prediction, feed_dict={i: d for i, d in zip(X, unknowndata.T)}))
    # also tried sess.run(prediction, feed_dict={X: unknowndata.T}) and not transposed, still errors
    # Output: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]  # one entry should obviously be 1 with a specific percentage
There I only run into problems ...
ValueError: Cannot feed value of shape (1,) for Tensor 'x:0', which has shape '(?, 128)'
Although I print the shape of unknowndata and it matches (1, 128).
I also tried it with
sess.run(prediction, feed_dict={X: unknownData})  # also transposed etc., but nothing worked for me; there I got the other error
TypeError: unhashable type: 'list'
I only want some predictions from this beautiful TensorFlow-trained model.
I figured out the problem!
First I needed to restore all of the values (weights and biases) and matmul them separately.
Second, I needed to create the same input as in the trained model, in my case:
X = tf.placeholder(tf.float32, [None, 128])
and then just call the prediction:
sess.run(prediction, feed_dict={X: unknownData})
But I do not get a percentage distribution, although I expect one because of the softmax function. Does anybody know how to access it?
The prediction tensor is obtained by an argmax on y. Instead of returning only prediction, you can add y to your output feed when you execute sess.run.
output_feed = [prediction, y]
preds, probs = sess.run(output_feed, feed_dict={X: unknownData})
preds will hold the predictions of the model and probs will hold the probability scores.
First, when you save, you have to add the placeholders you need to a collection with tf.add_to_collection('i', i); then retrieve them and pass them in the feed_dict.
In your example it is "i":
i = tf.get_collection('i')[0]
#sess.run(prediction, feed_dict={i: d for i,d in zip(X, unknowndata.T)})
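An end-to-end sketch of that save/restore pattern (the shapes match the 128-in / 11-out model described in the question; the zero-initialized layer, file paths, and collection keys are illustrative assumptions):
import os
import numpy as np
import tensorflow as tf

# --- training time: build the graph and register the tensors needed later ---
X = tf.placeholder(tf.float32, [None, 128], name='x')
W = tf.Variable(tf.zeros([128, 11]))
b = tf.Variable(tf.zeros([11]))
y = tf.nn.softmax(tf.matmul(X, W) + b, name='final_tensor')
tf.add_to_collection('input', X)
tf.add_to_collection('final tensor', y)

os.makedirs('models', exist_ok=True)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, 'models/model_1.ckpt')

# --- inference time: retrieve the same tensors by collection key and feed them ---
tf.reset_default_graph()
with tf.Session() as sess:
    saver = tf.train.import_meta_graph('models/model_1.ckpt.meta')
    saver.restore(sess, tf.train.latest_checkpoint('models/'))
    X = tf.get_collection('input')[0]
    y = tf.get_collection('final tensor')[0]
    unknowndata = np.random.rand(1, 128).astype(np.float32)
    probs = sess.run(y, feed_dict={X: unknowndata})  # (1, 11) softmax scores
    print(probs.argmax(axis=1), probs)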
