Using scatter_update with fed data in TensorFlow - python

I am trying to use scatter_update to update a slice of a tensor. My first code snippet, written to get familiar with the function, works perfectly fine:
import tensorflow as tf
import numpy as np

with tf.Session() as sess:
    init_val = tf.Variable(tf.zeros((3, 2)))
    indices = tf.constant([0, 1])
    update = tf.scatter_update(init_val, indices, tf.ones((2, 2)))

    init = tf.global_variables_initializer()
    sess.run(init)
    print(sess.run(update))
But when I try to feed the initial value into the graph like this:
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(3, 2))
    init_val = x
    indices = tf.constant([0, 1])
    update = tf.scatter_update(init_val, indices, tf.ones((2, 2)))

    init = tf.global_variables_initializer()
    sess.run(init)
    print(sess.run(update, feed_dict={x: np.zeros((3, 2))}))
I get the strange error
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [3,2]
[[{{node Placeholder_1}} = Placeholder[dtype=DT_FLOAT, shape=[3,2], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Dropping the tf.Variable around x when assigning it to init_val does not help either, since I then get the error
AttributeError: 'Tensor' object has no attribute '_lazy_read'
(see this entry on GitHub). Does anyone have an idea? Thanks in advance!
I am using Tensorflow 1.12 on CPU.

You can replace values in a tensor through scattering by building an update tensor and a mask tensor:
import tensorflow as tf
import numpy as np

with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(3, 2))
    init_val = x
    indices = tf.constant([0, 1])
    x_shape = tf.shape(x)
    indices = tf.expand_dims(indices, 1)
    replacement = tf.ones((2, 2))
    update = tf.scatter_nd(indices, replacement, x_shape)
    mask = tf.scatter_nd(indices, tf.ones_like(replacement, dtype=tf.bool), x_shape)
    result = tf.where(mask, update, x)
    print(sess.run(result, feed_dict={x: np.arange(6).reshape((3, 2))}))
Output:
[[1. 1.]
 [1. 1.]
 [4. 5.]]
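If you need this in more than one place, the same pattern can be wrapped in a small helper. This is just a sketch; the name scatter_update_tensor is mine, not a TensorFlow API, and it expects the row indices already expanded to the [[0], [1]] form:

def scatter_update_tensor(x, indices, updates):
    # emulate tf.scatter_update on a plain (non-variable) tensor
    x_shape = tf.shape(x)
    patch = tf.scatter_nd(indices, updates, x_shape)
    mask = tf.scatter_nd(indices, tf.ones_like(updates, dtype=tf.bool), x_shape)
    return tf.where(mask, patch, x)

# replace rows 0 and 1 of the fed placeholder with ones
result = scatter_update_tensor(x, tf.constant([[0], [1]]), tf.ones((2, 2)))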

Related

How to initialize an identity matrix in tensorflow with no specified parameters?

I am trying to implement part of the code for Graph Convolutional Networks given in this article. I notice that the author uses tf.eye() with no shape parameter. When I tried to run the same code with TensorFlow 1, it gave me the expected error: TypeError: eye() missing 1 required positional argument: 'num_rows'
Can someone explain how the tf.eye() call in the article works, and/or whether there is another way to initialize an identity matrix with an unspecified shape?
Here is the code (compatible with TensorFlow 1, because apparently TensorFlow 2 doesn't have tf.placeholder()):
import numpy as np
import networkx as nx
import tensorflow as tf

features = tf.placeholder(tf.float32, shape=[None, 2])
adjacency = tf.placeholder(tf.float32, shape=[None, 2])
degree = tf.placeholder(tf.float32, shape=[None, 2])
labels = tf.placeholder(tf.float32, shape=[None, 2])
weights = tf.Variable(tf.random.normal([], 0, 1, tf.float32, seed=1))

def layer(features, adjacency, degree, weights):
    with tf.name_scope('gcn_layer'):
        d_ = tf.pow(degree + tf.eye(), -0.5)
        y = tf.matmul(d_, tf.matmul(adjacency, d_))
        kernel = tf.matmul(features, weights)
        return tf.nn.relu(tf.matmul(y, kernel))

model = layer(features, adjacency, degree, weights)

with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
            logits=model, labels=labels))
    train_op = tf.train.AdamOptimizer(0.001, 0.9).minimize(loss)

with tf.Session() as sess:
    sess.run(train_op, feed_dict={
        features: features, adjacency: adjacency, degree: degree, labels: labels})
tf.eye() is used to create an identity matrix.
The correct usage of tf.eye() is:
Code:
tf.eye(
    num_rows, num_columns=None, batch_shape=None, dtype=tf.dtypes.float32, name=None
)
num_rows is the number of rows of your identity matrix. So if you want to create an identity matrix of shape (2, 2), you have to specify num_rows=2.
Example Usage:
tf.eye(2)
==> [[1., 0.],
     [0., 1.]]
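I am not sure what makes the no-argument call work in the article, but if the matrix size is only known at run time, num_rows can also be given as a scalar int32 tensor, so the identity can be built from the dynamic shape of another tensor. A minimal sketch using the adjacency placeholder from the question (the placeholder shapes in the question would still need adjusting):

n = tf.shape(adjacency)[0]   # dynamic number of rows as a scalar int32 tensor
identity = tf.eye(n)         # identity matrix sized at run time
d_ = tf.pow(degree + identity, -0.5)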

tf.gradients - dimensions of output

Here is my code:
import tensorflow as tf
import numpy as np

tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, 3], name='x')
W_1 = tf.get_variable('W_1', [3, 3], dtype=tf.float32, initializer=tf.constant_initializer(1.0))
layer_out = tf.matmul(x, W_1, name='layer_out')

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run([tf.gradients(layer_out, [x])], feed_dict={x: np.array([[1, 7, 5]])})
it returns:
[[array([[3., 3., 3.]], dtype=float32)]]
I am expecting to get a 3-by-3 matrix or, as per the tf.gradients docs, a list of length 3 with 3 elements for each list entry.
What am I missing?
UPDATE:
I see in the tf.gradients docs:
A list of sum(dy/dx) for each x in xs
but why the sum, and how do I get all entries of the Jacobian?
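tf.gradients sums over the outputs by design, so it returns one gradient per input rather than the full Jacobian. A common workaround, sketched here for this small example under the graph from the question, is to call tf.gradients once per output column and stack the results:

# gradient of each of the 3 output columns w.r.t. x, stacked into shape [3, 1, 3]
jacobian_rows = [tf.gradients(layer_out[:, i], x)[0] for i in range(3)]
jacobian = tf.stack(jacobian_rows)
print(sess.run(jacobian, feed_dict={x: np.array([[1, 7, 5]])}))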

Implementation of a neural model in TensorFlow

I am trying to implement a neural network model in TensorFlow but seem to be having problems with the shape of the placeholders. I'm new to TF, so it could just be a simple misunderstanding. Here's my code and a data sample:
_data=[[0.4,0.5,0.6,1],[0.7,0.8,0.9,0],....]
The data consists of arrays of 4 columns; the last column of each array is the label. I want to classify each array as label 0, label 1, or label 2.
import tensorflow as tf
import numpy as np

_data = datamatrix
X = tf.placeholder(tf.float32, [None, 3])
W = tf.Variable(tf.zeros([3, 1]))
b = tf.Variable(tf.zeros([3]))
init = tf.global_variables_initializer()
Y = tf.nn.softmax(tf.matmul(X, W) + b)

# placeholder for correct labels
Y_ = tf.placeholder(tf.float32, [None, 1])

# loss function
import time
start = time.time()
cross_entropy = -tf.reduce_sum(Y_ * tf.log(Y))

# % of correct answers found in batch
is_correct = tf.equal(tf.argmax(Y, 1), tf.argmax(Y_, 1))
accuracy = tf.reduce_mean(tf.cast(is_correct, tf.float32))

optimizer = tf.train.GradientDescentOptimizer(0.003)
train_step = optimizer.minimize(cross_entropy)

sess = tf.Session()
sess.run(init)

for i in range(1000):
    # load batch of images and correct answers
    batch_X, batch_Y = [x[:3] for x in _data[:2000]], [x[-1] for x in _data[:2000]]
    train_data = {X: batch_X, Y_: batch_Y}

    # train
    sess.run(train_step, feed_dict=train_data)

    # success ?
    a, c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I got the following error message after running my code:
ValueError: Cannot feed value of shape (2000,) for Tensor 'Placeholder_1:0', which has shape '(?, 1)'
My desired output is the performance of the model: the cross-entropy and accuracy values from the code line below:
a,c = sess.run([accuracy, cross_entropy], feed_dict=train_data)
I would also appreciate any suggestions on how to improve the model, or a model that is more suitable for my data.
The shapes of Placeholder_1:0 (that is, Y_) and the input data batch_Y are mismatched, as the error message says. Notice the 1-D vs. 2-D array.
So you should either define a 1-D placeholder:
Y_ = tf.placeholder(tf.float32, [None])
or prepare 2-D data:
batch_X, batch_Y = [x[:3] for x in _data[:2000]],[x[-1:] for x in _data[:2000]]
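To see why the second option works, note the difference between x[-1] (one scalar per row, giving a 1-D batch) and x[-1:] (a length-1 slice per row, giving a 2-D batch). A quick check using the two sample rows from the question:

import numpy as np

_data = [[0.4, 0.5, 0.6, 1], [0.7, 0.8, 0.9, 0]]
print(np.array([x[-1] for x in _data]).shape)   # (2,)   -> matches shape [None]
print(np.array([x[-1:] for x in _data]).shape)  # (2, 1) -> matches shape [None, 1]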

InvalidArgumentError when using summary in Tensorflow v1.2.1

I have written some simple code to try out the TensorFlow summary feature. The code is below.
import tensorflow as tf
import numpy as np

graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, [1, 2], name='x')
    W = tf.ones([2, 1], tf.float32, name='W')
    b = tf.constant([1.5], dtype=tf.float32, shape=(1, 1), name='bias')
    y_ = tf.add(tf.matmul(x, W, name='mul'), b, name='add')
    tf.summary.scalar('y', y_)

with tf.Session(graph=graph) as session:
    merged = tf.summary.merge_all()
    fw = tf.summary.FileWriter("/tmp/tensorflow/logs", graph=graph)
    tf.global_variables_initializer().run()

    x_var = np.array([1., 1.], np.float32).reshape([1, 2])
    print(x_var)
    summary, y = session.run([merged, y_], feed_dict={x: x_var})
    fw.add_summary(summary, 0)
    print(y)
    fw.close()
Basically, it tries to implement y = Wx + b.
The code works if I remove all the summary-related code. But if I add the summary-related code, I get the error below:
InvalidArgumentError (see above for traceback): tags and values not the same shape: [] != [1,1] (tag 'y')
[[Node: y = ScalarSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](y/tags, add)]]
I tried it in both normal Python and IPython.
Tags and values do not have the same shape: y_ has shape [1, 1], while tf.summary.scalar expects a scalar value. You can simply use tf.reduce_mean to solve this problem:
with graph.as_default():
    x = tf.placeholder(tf.float32, [None, 2], name='x')
    W = tf.ones([2, 1], tf.float32, name='W')
    b = tf.constant([1.5], dtype=tf.float32, shape=(1, 1), name='bias')
    y_ = tf.add(tf.matmul(x, W, name='mul'), b, name='add')
    tf.summary.scalar('y', tf.reduce_mean(y_))
This will create a scalar value.
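With that change, the session part of the original script should run as before and the logged scalar can be viewed in TensorBoard. A minimal sketch, mirroring the question's code and log directory:

with tf.Session(graph=graph) as session:
    merged = tf.summary.merge_all()
    fw = tf.summary.FileWriter("/tmp/tensorflow/logs", graph=graph)
    summary, y = session.run([merged, y_], feed_dict={x: [[1., 1.]]})
    fw.add_summary(summary, 0)   # write the summary at global step 0
    fw.close()

# then inspect it with: tensorboard --logdir /tmp/tensorflow/logs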

InvalidArgumentError: is the setting or the code wrong?

I got an InvalidArgumentError like this:
InvalidArgumentError (see above for traceback): You must feed a value
for placeholder tensor 'Placeholder_1' with dtype float
[[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[],
_device="/job:localhost/replica:0/task:0/cpu:0"]()]]
I cannot understand what is wrong in my code, and I cannot tell whether the problem is the setup or the code (syntax).
How can I fix this?
Here is my whole code:
import tensorflow as tf
import numpy as np

input_dim = 2
output_dim = 1

x = tf.placeholder("float", [None, input_dim])
# weights
W = tf.Variable(tf.random_uniform([input_dim, output_dim], -1.0, 1.0))
# bias
b = tf.Variable(tf.random_normal([output_dim]))
# sigmoid activation
y = tf.nn.sigmoid(tf.matmul(x, W) + b)

y_ = tf.placeholder("float", [None, output_dim])

loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.MomentumOptimizer(0.01, 0.97).minimize(loss)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)

for i in range(5000):
    batch_xs = np.array([
        [0., 0.],
        [0., 1.],
        [1., 0.],
        [1., 1.]
    ])
    batch_ys = np.array([
        [0.],
        [0.],
        [0.],
        [1.]
    ])
    sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
    print(i, sess.run(y, feed_dict={x: batch_xs, y: batch_ys}))
You have placeholders x and y_, not x and y, so the sess.run() call should be:
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
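For completeness, the print line in the loop also passes y in its feed_dict; to actually see the model's prediction it is enough to feed x there. A minimal sketch of the corrected loop body, assuming the fix above:

for i in range(5000):
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    # feed only x when evaluating the prediction y
    print(i, sess.run(y, feed_dict={x: batch_xs}))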
