Here is my code:
import tensorflow as tf
import numpy as np
tf.reset_default_graph()
x = tf.placeholder(tf.float32, [None, 3], name='x')
W_1 = tf.get_variable('W_1', [3, 3], dtype=tf.float32, initializer=tf.constant_initializer(1.0))
layer_out = tf.matmul(x, W_1, name='layer_out')
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run([tf.gradients(layer_out, [x])], feed_dict={x: np.array([[1, 7, 5]])})
it returns:
[[array([[3., 3., 3.]], dtype=float32)]]
I am expecting to get a 3 by 3 matrix or, per the tf.gradients docs, a list of length 3 with 3 elements per entry.
What am I missing?
UPDATE:
I see in the tf.gradients docs:
A list of sum(dy/dx) for each x in xs
but why the sum, and how do I get all entries of the Jacobian?
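A minimal sketch of one workaround, assuming the graph above: tf.gradients sums over the output components, so taking the gradient of each output column separately recovers one Jacobian row at a time:

jac_rows = [tf.gradients(layer_out[:, i], x)[0] for i in range(3)]  # one row per output component
jacobian = tf.stack(jac_rows, axis=1)  # shape (batch, 3, 3)
print(sess.run(jacobian, feed_dict={x: np.array([[1, 7, 5]])}))
# With W_1 all ones, every row is [1., 1., 1.]; summing them gives the [3., 3., 3.] above.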
I am trying to implement part of the code for Graph Convolutional Networks given in this article. I noticed that the author uses tf.eye() with no shape parameter. When I tried to rerun the same code using TensorFlow 1, it gave me the expected error: TypeError: eye() missing 1 required positional argument: 'num_rows'
Can someone explain how the tf.eye() in the article works, and/or whether there is another way to initialize an identity matrix with unspecified shape?
Here is the code (compatible with TensorFlow 1, because apparently TensorFlow 2 doesn't have tf.placeholder()):
import numpy as np
import networkx as nx
import tensorflow as tf
features = tf.placeholder(tf.float32, shape=[None, 2])
adjacency = tf.placeholder(tf.float32, shape=[None, 2])
degree = tf.placeholder(tf.float32, shape=[None, 2])
labels = tf.placeholder(tf.float32, shape=[None, 2])
weights = tf.Variable(tf.random.normal([], 0, 1, tf.float32, seed=1))

def layer(features, adjacency, degree, weights):
    with tf.name_scope('gcn_layer'):
        d_ = tf.pow(degree + tf.eye(), -0.5)  # tf.eye() with no num_rows -- the line in question
        y = tf.matmul(d_, tf.matmul(adjacency, d_))
        kernel = tf.matmul(features, weights)
        return tf.nn.relu(tf.matmul(y, kernel))

model = layer(features, adjacency, degree, weights)

with tf.name_scope('loss'):
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(
            logits=model, labels=labels))

train_op = tf.train.AdamOptimizer(0.001, 0.9).minimize(loss)

with tf.Session() as sess:
    sess.run(train_op, feed_dict={
        features: features, adjacency: adjacency, degree: degree, labels: labels})
tf.eye() is used to create an identity matrix.
The correct usage of tf.eye() is:
Code:
tf.eye(
    num_rows, num_columns=None, batch_shape=None, dtype=tf.dtypes.float32, name=None
)
num_rows is the number of rows of your identity matrix, so if you want to create an identity matrix of shape (2, 2) you have to specify num_rows=2.
Example Usage:
tf.eye(2)
==> [[1., 0.],
[0., 1.]]
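As for the unspecified shape: num_rows also accepts a scalar int32 tensor, so a minimal sketch (assuming a square matrix fed through the degree placeholder from the question) would derive it at run time:

n = tf.shape(degree)[0]   # number of rows, only known at run time
identity = tf.eye(n)      # num_rows may be a scalar int32 tensor
d_ = tf.pow(degree + identity, -0.5)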
I am training an autoencoder with 2 placeholders that store the following:
x1 = [x1]
X = [x1,x2,x3...xn]
It holds that:
y1 = W*x1 + b_encoding1
Therefore, I have a variable named b_encoder1 (the b)
(When I print it I get: <tf.Variable 'b_encoder1:0' shape=(10,) dtype=float32_ref>)
But it also holds that:
Y = W*X + b_encoding1
The size of the second b_encoding1 has to be (10, n) instead of (10,). How can I augment it and pass it in TensorFlow?
Y = tf.compat.v1.nn.xw_plus_b(X, W1, b_encoder1, name='Y')
The whole code looks like this:
x1 = tf.compat.v1.placeholder(tf.float32, [None, input_shape], name='x1')
X = tf.compat.v1.placeholder(tf.float32, [None, input_shape, sp], name='X')
W1 = tf.Variable(tf.initializers.GlorotUniform()(shape=[input_shape, code_length]), name='W1')
b_encoder1 = tf.compat.v1.get_variable(name='b_encoder1', shape=[code_length], initializer=tf.compat.v1.initializers.zeros(), use_resource=False)
K = tf.Variable(tf.initializers.GlorotUniform()(shape=[code_length, code_length]), name='K')
b_decoder1 = tf.compat.v1.get_variable(name='b_decoder1', shape=[input_shape], initializer=tf.compat.v1.initializers.zeros(), use_resource=False)
y1 = tf.compat.v1.nn.xw_plus_b(x1, W1, b_encoder1, name='y1')
Y = tf.compat.v1.nn.xw_plus_b(X, W1, b_encoder1, name='Y')
I also declare the loss function and so on and then train with:
with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for epoch_i in range(epochs):
        for batch_i in range(number_of_batches):
            batch_data = getBatch(shuffled_data, batch_i, batch_size)
            sess.run(optimizer, feed_dict={x1: batch_data[:, :, 0], X: batch_data})
        train_loss = sess.run(loss, feed_dict={x1: aug_data[:, :, 0], X: aug_data})
        print(epoch_i, train_loss)
You can consider X as a batch of x. X can take in an arbitrary number of samples:
import tensorflow as tf
import numpy as np
X = tf.placeholder(shape=(None, 100), dtype=tf.float32)
W = tf.get_variable('kernel', [100,10])
b = tf.get_variable('bias',[10])
Y = tf.nn.xw_plus_b(X, W,b, name='Y')
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # tf version < 1.13
    out = sess.run(Y, {X: np.random.rand(128, 100)})  # here n=128
Note that the bias b is still 10-dimensional regardless of the value of n.
Please try:
b_encoding1 = tf.expand_dims(b_encoding1, axis = 1)
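For context, a minimal sketch (not from the original answer) of why the expanded bias broadcasts: once b has shape (10, 1), adding it to a (10, n) product repeats it across all n columns:

b = tf.zeros([10])                  # shape (10,)
b_col = tf.expand_dims(b, axis=1)   # shape (10, 1)
M = tf.ones([10, 5])                # stands in for W*X with n = 5 columns
out = M + b_col                     # (10, 1) broadcasts to (10, 5)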
I am trying to use scatter_update to update a slice of a tensor. My first code snippet, written to get familiar with the function, works perfectly fine.
import tensorflow as tf
import numpy as np
with tf.Session() as sess:
    init_val = tf.Variable(tf.zeros((3, 2)))
    indices = tf.constant([0, 1])
    update = tf.scatter_update(init_val, indices, tf.ones((2, 2)))
    init = tf.global_variables_initializer()
    sess.run(init)
    print(sess.run(update))
But when I try to feed the initial value into the graph like
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(3, 2))
    init_val = tf.Variable(x)
    indices = tf.constant([0, 1])
    update = tf.scatter_update(init_val, indices, tf.ones((2, 2)))
    init = tf.global_variables_initializer()
    sess.run(init)
    print(sess.run(update, feed_dict={x: np.zeros((3, 2))}))
I get the strange error
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [3,2]
[[{{node Placeholder_1}} = Placeholder[dtype=DT_FLOAT, shape=[3,2], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Dropping the tf.Variable around x when assigning it to init_val does not help either, since then I get the error
AttributeError: 'Tensor' object has no attribute '_lazy_read'
(see this entry on GitHub). Does anyone have an idea? Thanks in advance!
I am using TensorFlow 1.12 on CPU.
You can replace values in a tensor through scattering by building an update tensor and a mask tensor:
import tensorflow as tf
import numpy as np
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(3, 2))
    indices = tf.constant([0, 1])
    x_shape = tf.shape(x)
    indices = tf.expand_dims(indices, 1)
    replacement = tf.ones((2, 2))
    update = tf.scatter_nd(indices, replacement, x_shape)
    mask = tf.scatter_nd(indices, tf.ones_like(replacement, dtype=tf.bool), x_shape)
    result = tf.where(mask, update, x)
    print(sess.run(result, feed_dict={x: np.arange(6).reshape((3, 2))}))
Output:
[[1. 1.]
[1. 1.]
[4. 5.]]
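For reuse, the same pattern can be wrapped in a small helper (a sketch under the same assumptions, not from the original answer):

def scatter_replace(tensor, indices, replacement):
    # Returns a copy of `tensor` with the rows at `indices` replaced by `replacement`.
    shape = tf.shape(tensor)
    idx = tf.expand_dims(indices, 1)
    update = tf.scatter_nd(idx, replacement, shape)
    mask = tf.scatter_nd(idx, tf.ones_like(replacement, dtype=tf.bool), shape)
    return tf.where(mask, update, tensor)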
I have written some simple code to try out the TensorFlow summary feature. The code is below.
import tensorflow as tf
import numpy as np
graph = tf.Graph()
with graph.as_default():
    x = tf.placeholder(tf.float32, [1, 2], name='x')
    W = tf.ones([2, 1], tf.float32, name='W')
    b = tf.constant([1.5], dtype=tf.float32, shape=(1, 1), name='bias')
    y_ = tf.add(tf.matmul(x, W, name='mul'), b, name='add')
    tf.summary.scalar('y', y_)

with tf.Session(graph=graph) as session:
    merged = tf.summary.merge_all()
    fw = tf.summary.FileWriter("/tmp/tensorflow/logs", graph=graph)
    tf.global_variables_initializer().run()
    x_var = np.array([1., 1.], np.float32).reshape([1, 2])
    print(x_var)
    summary, y = session.run([merged, y_], feed_dict={x: x_var})
    fw.add_summary(summary, 0)
    print(y)
    fw.close()
Basically, it tries to implement y = Wx + b.
The code works if I remove all the summary-related code. But if I add it, I get the error below:
InvalidArgumentError (see above for traceback): tags and values not the same shape: [] != [1,1] (tag 'y')
[[Node: y = ScalarSummary[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](y/tags, add)]]
I tried this in both plain Python and IPython.
Tags and values do not have the same shape. You are passing y_, which has shape (1, 1), while the summary expects a scalar value. You can simply use tf.reduce_mean to solve this problem:
with graph.as_default():
    x = tf.placeholder(tf.float32, [None, 2], name='x')
    W = tf.ones([2, 1], tf.float32, name='W')
    b = tf.constant([1.5], dtype=tf.float32, shape=(1, 1), name='bias')
    y_ = tf.add(tf.matmul(x, W, name='mul'), b, name='add')
    tf.summary.scalar('y', tf.reduce_mean(y_))
This will create a scalar value.
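If you want to record the whole tensor rather than a single number, a histogram summary is an alternative (a sketch, not part of the original answer):

# Records the full distribution of y_ instead of collapsing it to one scalar.
tf.summary.histogram('y_hist', y_)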
I got an InvalidArgumentError like:
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder_1' with dtype float
[[Node: Placeholder_1 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
I cannot understand what is wrong in my code, or whether the problem is in the setup or in the syntax.
How can I fix this?
Here is my whole code:
import tensorflow as tf
import numpy as np
input_dim = 2
output_dim = 1
x = tf.placeholder("float", [None, input_dim])
# weights
W = tf.Variable(tf.random_uniform([input_dim, output_dim], -1.0, 1.0))
# bias
b = tf.Variable(tf.random_normal([output_dim]))
# sigmoid activation
y = tf.nn.sigmoid(tf.matmul(x, W) + b)
y_ = tf.placeholder("float", [None, output_dim])
loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.MomentumOptimizer(0.01, 0.97).minimize(loss)
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(5000):
    batch_xs = np.array([
        [0., 0.],
        [0., 1.],
        [1., 0.],
        [1., 1.]
    ])
    batch_ys = np.array([
        [0.],
        [0.],
        [0.],
        [1.]
    ])
    sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})
    print(i, sess.run(y, feed_dict={x: batch_xs, y: batch_ys}))
You have placeholders x and y_, not x and y. Feeding y (a computed tensor) leaves the placeholder y_ (the 'Placeholder_1' in the error) unfed even though loss still depends on it, which is exactly the error you see.
So the sess.run() call should be sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}).
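For completeness, a minimal sketch of the corrected loop (assuming batch_xs and batch_ys defined as above; the print only needs x, since y is computed by the graph):

for i in range(5000):
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    print(i, sess.run(y, feed_dict={x: batch_xs}))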