create a list of matrices in Tensorflow - python

I am new to Python and TensorFlow, and I want to initialize k matrices (let's say k=10), each one 300x300. I wrote this line, but I'm not sure whether it is the right way to do it:
R = tf.Variable(tf.random_normal(shape=(self.k, 300, 300)), name="R")
I would appreciate any help.

That's the right way, but be careful: the variable is NOT initialized yet. It only gets initialized when you actually run an initializer, like this:
R = tf.Variable(tf.random_normal(shape=(10, 300, 300)), name="R")
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)  # now R gets initialized
    r = sess.run(R)    # load the value of R as a numpy array r
    # ... check r's value ...
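Once R is initialized, each of the k matrices can be pulled out by slicing the first dimension; a minimal sketch (shapes follow the question):
import tensorflow as tf

R = tf.Variable(tf.random_normal(shape=(10, 300, 300)), name="R")
first_matrix = R[0]  # slice out one 300x300 matrix
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(first_matrix).shape)  # (300, 300)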

Related

can we feed tf.variable to tf.placeholder in feed_dict?

I want to do a simple task with TensorFlow, but I am getting an error.
import numpy as np
import pandas as pd
import tensorflow as tf

fv = tf.Variable(10.0, name="first_var")
sv = tf.Variable(20.0, name="second_var")
fvp = tf.placeholder("float32", name="first_fvp", shape=[])
svp = tf.placeholder("float32", name="second_svp", shape=[])
result = tf.Variable(0.0, name="output")
result = np.multiply(fvp, svp)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(result, feed_dict={fvp: fv, svp: sv}))
In this case, I am getting the error: setting an array element with a sequence.
And if I use
print(sess.run(result, feed_dict={fvp: 5.0, svp: 10.0}))
I get the output 50.0.
First, I still don't quite understand what your question is. It seems that you've solved that error already; please edit if possible.
About that error:
You cannot feed Tensors through feed_dict.
Read tensorflow/python/client/session.py carefully: acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles. In your case, fv and sv are tensors.
So your second call, print(sess.run(result, feed_dict={fvp: 5.0, svp: 10.0})), works.
You can also try fv = np.array([10.0]) and sv = np.array([20.0]).
Also, you don't need result = tf.Variable(0.0, name="output"); if you want to name the output, you can use result = tf.identity(np.multiply(fvp, svp), name="output"). A sketch of the fixed script follows.
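A minimal sketch of the fix, assuming the goal is to feed the placeholders with the variables' current values (tf.multiply stands in for np.multiply, following the naming suggestion above): evaluate the variables to plain Python floats first, then feed those:
import tensorflow as tf

fv = tf.Variable(10.0, name="first_var")
sv = tf.Variable(20.0, name="second_var")
fvp = tf.placeholder("float32", name="first_fvp", shape=[])
svp = tf.placeholder("float32", name="second_svp", shape=[])
result = tf.multiply(fvp, svp, name="output")

sess = tf.Session()
sess.run(tf.global_variables_initializer())
fv_val, sv_val = sess.run([fv, sv])  # plain Python floats, which ARE feedable
print(sess.run(result, feed_dict={fvp: fv_val, svp: sv_val}))  # 200.0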

What is not feedable in tensorflow?

I've tried the following code, but I can't find anything that is not feedable in TensorFlow. Could anybody show me something that is not feedable?
#!/usr/bin/env python
# vim: set noexpandtab tabstop=2 shiftwidth=2 softtabstop=-1 fileencoding=utf-8:
import tensorflow as tf

x = tf.Variable(3)
y = tf.constant(3)
z = tf.add(1, 2)
with tf.Session() as sess:
    print(sess.graph.is_feedable(x))
    print(sess.graph.is_feedable(y))
    print(sess.graph.is_feedable(z))
All tensors are feedable (including constants, as you can see), unless they are explicitly prevented from feeding via the tf.Graph.prevent_feeding method. This method can be called directly or indirectly; for example, that's what the tf.contrib.util.constant_value function does:
NOTE: If constant_value(tensor) returns a non-None result, it will no longer be possible to feed a different value for tensor. This allows the result of this function to influence the graph that is constructed, and permits static shape optimizations.
Sample code:
y = tf.constant(3)
tf.contrib.util.constant_value(y)  # 3
with tf.Session() as sess:
    print(sess.graph.is_feedable(y))  # False!
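For completeness, prevent_feeding can also be called directly; a minimal sketch (reusing z from the question above):
z = tf.add(1, 2)
with tf.Session() as sess:
    sess.graph.prevent_feeding(z)
    print(sess.graph.is_feedable(z))  # False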

TensorFlow while_loop converts variable to constant?

I'm trying to update a two-dimensional tensor in a nested while_loop(). When passing the variable to the second loop, however, I cannot update it using tf.assign(), as it throws this error:
ValueError: Sliced assignment is only supported for variables
Somehow it works fine if I create the variable outside the while_loop and use it only in the first loop.
How can I modify my 2D tf variable in the second while loop?
(I'm using python 2.7 and TensorFlow 1.2)
My code:
import tensorflow as tf
import numpy as np

tf.reset_default_graph()

BATCH_SIZE = 10
LENGTH_MAX_OUTPUT = 31

it_batch_nr = tf.constant(0)
it_row_nr = tf.Variable(0, dtype=tf.int32)
it_col_nr = tf.constant(0)
cost = tf.constant(0)

it_batch_end = lambda it_batch_nr, cost: tf.less(it_batch_nr, BATCH_SIZE)
it_row_end = lambda it_row_nr, cost_matrix: tf.less(it_row_nr, LENGTH_MAX_OUTPUT+1)

def iterate_batch(it_batch_nr, cost):
    cost_matrix = tf.Variable(np.ones((LENGTH_MAX_OUTPUT+1, LENGTH_MAX_OUTPUT+1)), dtype=tf.float32)
    it_rows, cost_matrix = tf.while_loop(it_row_end, iterate_row, [it_row_nr, cost_matrix])
    cost = cost_matrix[0, 0]  # IS 1.0, SHOULD BE 100.0
    return tf.add(it_batch_nr, 1), cost

def iterate_row(it_row_nr, cost_matrix):
    # THIS THROWS AN ERROR:
    cost_matrix[0, 0].assign(100.0)
    return tf.add(it_row_nr, 1), cost_matrix

it_batch = tf.while_loop(it_batch_end, iterate_batch, [it_batch_nr, cost])

sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
out = sess.run(it_batch)
print(out)
tf.Variable objects cannot be used as loop variables in a while loop, as loop variables are implemented differently.
So either create your variable outside the loop and update it yourself with tf.assign in each iteration, or manually keep track of the updates as you do with loop variables (by returning their updated values from the loop bodies; in your case, by using the value from the inner loop as the new value for the outer loop).
Got this to work, with @AlexandrePassos' help, by placing the Variable outside the while_loop. However, I also had to force the execution of the assign ops using tf.control_dependencies() (as the operations are not directly used by the loop variables). The loop now looks like this:
cost_matrix = tf.Variable(np.ones((LENGTH_MAX_OUTPUT+1, LENGTH_MAX_OUTPUT+1)), dtype=tf.float32)

def iterate_batch(it_batch_nr, cost):
    it_rows = tf.while_loop(it_row_end, iterate_row, [it_row_nr])
    with tf.control_dependencies([it_rows]):
        cost = cost_matrix[0, 0]
    return tf.add(it_batch_nr, 1), cost

def iterate_row(it_row_nr):
    a = tf.assign(cost_matrix[0, 0], 100.0)
    with tf.control_dependencies([a]):
        return tf.add(it_row_nr, 1)
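For completeness, a minimal driver under the same assumptions: it_row_end must drop the cost_matrix argument (it is no longer a loop variable), and cost must start as a float32 constant so the loop-variable dtypes match what iterate_batch returns:
it_row_end = lambda it_row_nr: tf.less(it_row_nr, LENGTH_MAX_OUTPUT+1)  # cost_matrix dropped
cost = tf.constant(0.0)  # float32, matching cost_matrix[0, 0]
it_batch = tf.while_loop(it_batch_end, iterate_batch, [it_batch_nr, cost])
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
print(sess.run(it_batch))  # cost is now 100.0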

Avoid cluttering the tensorflow graph with assign operations

I have to run something like the following code
import tensorflow as tf

sess = tf.Session()
x = tf.Variable(42.)
for i in range(10000):
    sess.run(x.assign(42.))
    sess.run(x)
    print(i)
several times. The actual code is much more complicated and uses more variables.
The problem is that the TensorFlow graph grows with each instantiated assign op, which eventually slows down the computation.
I could use feed_dict= to set the value, but I would like to keep my state in the graph, so that I can easily query it in other places.
Is there some way of avoiding cluttering the current graph in this case?
I think I've found a good solution for this:
I define a placeholder y and create an op that assigns the value of y to x.
I can then use that op repeatedly, using feed_dict={y: value} to assign a new value to x.
This doesn't add another op to the graph.
It turns out that the loop runs much more quickly than before as well.
import tensorflow as tf

sess = tf.Session()
x = tf.Variable(42.)
y = tf.placeholder(dtype=tf.float32)
assign = x.assign(y)
sess.run(tf.initialize_all_variables())
for i in range(10000):
    sess.run(assign, feed_dict={y: i})
    print(i, sess.run(x))
Each time you call sess.run(x.assign(42.)), two things happen: (i) a new assign operation is added to the computational graph sess.graph, and (ii) the newly added operation executes. No wonder the graph gets pretty large if the loop repeats many times. If you define the assignment operation before execution (asgnmnt_operation in the example below), just a single operation is added to the graph, so the performance is great:
import tensorflow as tf

x = tf.Variable(42.)
c = tf.constant(42.)
asgnmnt_operation = x.assign(c)

sess = tf.Session()
for i in range(10000):
    sess.run(asgnmnt_operation)
    sess.run(x)
    print(i)

How do I get the current value of a Variable?

Suppose we have a variable:
x = tf.Variable(...)
This variable can be updated during the training process using the assign() method.
What is the best way to get the current value of a variable?
I know we could use this:
session.run(x)
But I'm afraid this would trigger a whole chain of operations.
In Theano, you could just do
y = theano.shared(...)
y_vals = y.get_value()
I'm looking for the equivalent thing in TensorFlow.
The only way to get the value of the variable is by running it in a session. In the FAQ it is written that:
A Tensor object is a symbolic handle to the result of an operation,
but does not actually hold the values of the operation's output.
So TF equivalent would be:
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    v = sess.run(x)
    print(v)  # will show you your variable
The part with init = tf.global_variables_initializer() is important and is needed to initialize the variables.
Also, take a look at InteractiveSession if you work in IPython.
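A minimal InteractiveSession sketch: it installs itself as the default session, so run() and eval() work without passing a session explicitly:
import tensorflow as tf

sess = tf.InteractiveSession()
x = tf.Variable([1.0, 2.0])
x.initializer.run()  # runs in the default (interactive) session
print(x.eval())      # eval() also uses the default session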
In general, session.run(x) will evaluate only the nodes that are necessary to compute x and nothing else, so it should be relatively cheap if you want to inspect the value of the variable.
Take a look at this great answer https://stackoverflow.com/a/33610914/5543198 for more context.
tf.Print can simplify your life!
tf.Print will print the value of the tensor(s) you tell it to print at the point where the tf.Print node sits in the graph, whenever that part of the graph is evaluated.
So for example:
import tensorflow as tf

x = tf.Variable([1.0, 2.0])
x = tf.Print(x, [x])
x = 2 * x

sess = tf.Session()
sess.run(tf.initialize_all_variables())
sess.run(x)
This prints
[1.0 2.0]
because tf.Print outputs the value of x at the point where the tf.Print node was inserted into the graph. If instead you do
v = x.eval(session=sess)
print(v)
you will get:
[2.0 4.0]
because it gives you the final value of x.
Since TensorFlow 2.0 removed sessions in favor of eager execution, if you want to extract values from a tensor (e.g. one called net), you can call
net[tf.newaxis, :, :].numpy()
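A minimal TF 2.x sketch (eager execution, no session; net is a stand-in variable name):
import tensorflow as tf

net = tf.Variable([[1.0, 2.0], [3.0, 4.0]])
print(net.numpy())                    # the full value as a numpy array
print(net[tf.newaxis, :, :].numpy())  # same value with an added leading axis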
