tf.where() doesn't produce the same result as np.where() - python

I'm comparing TensorFlow to NumPy using the same logical code.
When implementing tf.where, I can't get the same result as np.where.
What is the problem with the code or its usage below?
Data:
X_batch = np.concatenate([np.arange(10).reshape(1, -1) for i in range(10)], axis=0)
TensorFlow tf.where toy code:
X = tf.placeholder(dtype=tf.int32, shape=[10, 10])
with tf.Session() as sess:
    print(sess.run(tf.where(X > 5, tf.zeros([10, 10], dtype=tf.int32), X),
                   feed_dict={X: X_batch}))
NumPy np.where toy code:
np.where(X_batch > 5, np.zeros([10,10]), X_batch)
The code had some typos; they have been corrected.

I edited the code. The inputs to tf.where() should match those of np.where(): pass a 10x10 zero matrix and X as the value arguments, just as you pass np.zeros([10, 10]) and X_batch to np.where().
import tensorflow as tf
import numpy as np
X_batch = np.concatenate([np.arange(10).reshape(1, -1) for i in range(10)], axis=0)
#print(X_batch )
X = tf.placeholder(dtype=tf.int32, shape=[10, 10])
with tf.Session() as sess:
    print(sess.run(tf.where(X > 5, tf.fill([10, 10], 0), X),
                   feed_dict={X: X_batch}))
Hope this helps.
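For reference, a quick NumPy check of what both snippets should print, given that each row of X_batch is 0..9. Note that the NumPy result is float64 because np.zeros defaults to float, while the TF result stays int32; the values themselves are identical:
import numpy as np
X_batch = np.concatenate([np.arange(10).reshape(1, -1) for i in range(10)], axis=0)
# values greater than 5 are replaced with 0 in every row
print(np.where(X_batch > 5, np.zeros([10, 10]), X_batch)[0])
# [0. 1. 2. 3. 4. 5. 0. 0. 0. 0.]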

Related

Using scatter_update with fed data in tensorflow

I am trying to use scatter_update to update a slice of a tensor. My first code snippet to get familiar with the function works out perfectly fine.
import tensorflow as tf
import numpy as np
with tf.Session() as sess:
    init_val = tf.Variable(tf.zeros((3, 2)))
    indices = tf.constant([0, 1])
    update = tf.scatter_update(init_val, indices, tf.ones((2, 2)))
    init = tf.global_variables_initializer()
    sess.run(init)
    print(sess.run(update))
But when I try to feed the initial value into the graph like
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(3, 2))
    init_val = tf.Variable(x)
    indices = tf.constant([0, 1])
    update = tf.scatter_update(init_val, indices, tf.ones((2, 2)))
    init = tf.global_variables_initializer()
    sess.run(init)
    print(sess.run(update, feed_dict={x: np.zeros((3, 2))}))
I get the strange error
InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_1' with dtype float and shape [3,2]
[[{{node Placeholder_1}} = Placeholder[dtype=DT_FLOAT, shape=[3,2], _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]
Dropping the tf.Variable around x when assigning it to init_val also does not help, since then I get the error
AttributeError: 'Tensor' object has no attribute '_lazy_read'
(see this entry on GitHub). Does anyone have an idea? Thanks in advance!
I am using Tensorflow 1.12 on CPU.
You can replace values in a tensor through scattering by building an update tensor and a mask tensor:
import tensorflow as tf
import numpy as np
with tf.Session() as sess:
    x = tf.placeholder(tf.float32, shape=(3, 2))
    init_val = x
    indices = tf.constant([0, 1])
    x_shape = tf.shape(x)
    indices = tf.expand_dims(indices, 1)
    replacement = tf.ones((2, 2))
    update = tf.scatter_nd(indices, replacement, x_shape)
    mask = tf.scatter_nd(indices, tf.ones_like(replacement, dtype=tf.bool), x_shape)
    result = tf.where(mask, update, x)
    print(sess.run(result, feed_dict={x: np.arange(6).reshape((3, 2))}))
Output:
[[1. 1.]
 [1. 1.]
 [4. 5.]]

How do I use tf.reshape()?

import tensorflow as tf
import random
import numpy as np
x = tf.placeholder('float')
x = tf.reshape(x, [-1,28,28,1])
with tf.Session() as sess:
    x1 = np.asarray([random.uniform(0, 1) for i in range(784)])
    result = sess.run(x, feed_dict={x: x1})
    print(result)
I had some problems reshaping MNIST data, but this question is a simplified version of my problem...
Why isn't this code actually working?
It shows
"ValueError: Cannot feed value of shape (784,) for Tensor 'Reshape:0', which has shape '(?, 28, 28, 1)' ".
How could I solve it?
After the reassignment, x is a tensor with shape [-1,28,28,1], and as the error says, you cannot feed a value of shape (784,) into shape (?, 28, 28, 1). You can use a different variable name:
import tensorflow as tf
import random
import numpy as np
x = tf.placeholder('float')
y = tf.reshape(x, [-1,28,28,1])
with tf.Session() as sess:
    x1 = np.asarray([random.uniform(0, 1) for i in range(784)])
    result = sess.run(y, feed_dict={x: x1})
    print(result)
Conceptually
You get the error because sess.run(x, feed_dict={x: x1}) tries to feed and reshape through the same variable: after the reassignment, x refers to the reshape result rather than the placeholder, which creates a problem at run time. You cannot do this with a single variable.
import tensorflow as tf
import random
import numpy as np
x = tf.placeholder('float')
y = tf.reshape(x, [-1,28,28,1])
with tf.Session() as sess:
    x1 = np.asarray([random.uniform(0, 1) for i in range(784)])
    result = sess.run(y, feed_dict={x: x1})
    print(result)
In TensorFlow, these Python names refer to graph nodes: x is the placeholder that holds the flat floating-point values, and another variable, say y, holds the reshaped values of shape [-1,28,28,1].
If the same variable name is used for both, it has to act as the placeholder for two different things, which is not possible.
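As a quick sanity check of the fix above (a minimal sketch, assuming the same 784-value input), the fetched result has shape (1, 28, 28, 1), since the -1 dimension resolves to 784 / (28 * 28 * 1) = 1:
import tensorflow as tf
import numpy as np
x = tf.placeholder('float')
y = tf.reshape(x, [-1, 28, 28, 1])
with tf.Session() as sess:
    x1 = np.random.uniform(0, 1, 784)
    result = sess.run(y, feed_dict={x: x1})
    # the -1 dimension is inferred from the size of the fed array
    print(result.shape)  # (1, 28, 28, 1)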

Tensorflow, errors with dataset: "Cannot convert a DataFrame into a Tensor or Operation."

I am trying to create a neural network with TensorFlow, using a pandas DataFrame as my data. This gives me an error saying that I cannot convert a DataFrame into a Tensor. I thought that passing the DataFrame through numpy.asarray() would fix this error, but I still get it.
This is my code:
import numpy as np
import tensorflow as tf
import pandas as pd
dataframe = pd.read_csv('data.csv')
dataframe.drop(dataframe.columns.difference(["Happiness.Score", "Freedom", "Family", "Generosity"]), 1, inplace=True)
train = dataframe[1:11]
test = dataframe[12:22]
test.pop("Happiness.Score")
dataY = np.asarray(train["Happiness.Score"])
dataX = np.asarray(train.drop(["Happiness.Score"], axis=1))
inputX = tf.placeholder(tf.float32, [10, 3])
inputY = tf.placeholder(tf.float32, [10])
W = tf.Variable(tf.zeros([3, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(inputX, W) + b)
cross_entropy = tf.reduce_sum(y * tf.log(inputY))
optimizer = tf.train.GradientDescentOptimizer(.01)
trainer = optimizer.minimize(cross_entropy)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for step in range(1000):
    sess.run(train, feed_dict={inputX: dataX, inputY: dataY})
print(sess.run(cross_entropy, feed_dict={inputX: dataX, inputY: dataY}))
sess.close()
This throws the error:
TypeError: Fetch argument ... has invalid type <class 'pandas.core.frame.DataFrame'>, must be a string or Tensor. (Can not convert a DataFrame into a Tensor or Operation.)
Any ideas on how to fix this?
Did you mean to use this line?
sess.run(trainer, feed_dict={inputX: dataX, inputY: dataY})
You are currently using this line:
sess.run(train, feed_dict={inputX: dataX, inputY: dataY})
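Here, train is the pandas DataFrame slice defined earlier, and sess.run() cannot fetch a DataFrame; trainer is the op returned by optimizer.minimize(). A minimal sketch of the corrected loop, with everything else in the question's script unchanged:
for step in range(1000):
    # fetch the training op, not the DataFrame named `train`
    sess.run(trainer, feed_dict={inputX: dataX, inputY: dataY})
print(sess.run(cross_entropy, feed_dict={inputX: dataX, inputY: dataY}))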

Tensorflow: Selecting items from one tensor by another tensor

I have a value tensor and a reordering tensor. The reordering tensor gives the ordering for each row in the value tensor. How can I use this reordering tensor to actually reorder the values in the value tensor?
This gives the desired result in numpy (Indexing one array by another in numpy):
import numpy as np
values = np.array([
    [5, 4, 100],
    [10, 20, 500]
])
reorder_rows = np.array([
    [1, 2, 0],
    [0, 2, 1]
])
result = values[np.arange(values.shape[0])[:,None],reorder_rows]
print(result)
# [[ 4 100 5]
# [ 10 500 20]]
How can I do the same in tf?
I have tried to play with slicing and tf.gather_nd but can't make it work.
Thanks.
Try the following:
import numpy as np
values = np.array([
    [5, 4, 100],
    [10, 20, 500]
])
reorder_rows = np.array([
    [1, 2, 0],
    [0, 2, 1]
])
import tensorflow as tf
values = tf.constant(values)
reorder_rows = tf.constant(reorder_rows, dtype=tf.int32)
x = tf.tile(tf.range(tf.shape(values)[0])[:,tf.newaxis], [1,tf.shape(values)[1]])
res = tf.gather_nd(values, tf.stack([x, reorder_rows], axis=-1))
sess = tf.InteractiveSession()
res.eval()
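# Expected output: [[4, 100, 5], [10, 500, 20]], matching the NumPy result above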
The following tf code should give the same result:
values = tf.constant([
    [5, 4, 100],
    [10, 20, 500]
])
reorder_rows = tf.constant([
    [[0, 1], [0, 2], [0, 0]],
    [[1, 0], [1, 2], [1, 1]]
])
result = tf.gather_nd(values, reorder_rows)
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
result.eval()
# Result:
# [[ 4, 100, 5],
#  [ 10, 500, 20]]

Using Sparse Matrix Arguments in a Tensorflow Function

I am new to Tensorflow. I am trying to write a function in python using Tensorflow that operates on a sparse matrix input. Normally I would define a tensorflow placeholder, but apparently there is no placeholder for sparse matrices.
What is the proper way to define a function that operates on sparse data in tensorflow and pass values into it?
Specifically, I am trying to rewrite the fundamental example of a multilayer perceptron, found here https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/multilayer_perceptron.py, to accept sparse input instead of dense.
As a dummy example, how would you write a function that looks something like this?
import tensorflow as tf
x = tf.placeholder("sparse")
y = tf.placeholder("float", [None, n_classes])
# Create model
def sparse_multiply(x, y):
    out_layer = tf.sparse_tensor_dense_matmul(x, y)
    return out_layer
pred = sparse_multiply(x, y)
# Launch the graph
with tf.Session() as sess:
    result = sess.run(pred, feed_dict={x: x_input, y: y_input})
Someone at the link https://github.com/tensorflow/tensorflow/issues/342 recommended, as a workaround, passing in the elements needed to construct the sparse matrix and then creating the sparse matrix on the fly within the function. That seems a little hacky, and I get errors when I try to construct it that way.
Any help, especially answers with code, would be greatly appreciated!
I think I figured it out. The suggestion I linked to actually did work; I just needed to correct all the inputs to have consistent types. Here is the dummy example I listed in the question, coded correctly:
import tensorflow as tf
import sklearn.feature_extraction
import numpy as np
def convert_csr_to_sparse_tensor_inputs(X):
    coo = X.tocoo()
    indices = np.mat([coo.row, coo.col]).transpose()
    return indices, coo.data, coo.shape
X = ____ #Some sparse 2 x 8 csr matrix
y_input = np.asarray([1, 1, 1, 1, 1, 1, 1, 1])
y_input.shape = (8,1)
x_indices, x_values, x_shape = convert_csr_to_sparse_tensor_inputs(X)
# tf Graph input
y = tf.placeholder(tf.float64)
values = tf.placeholder(tf.float64)
indices = tf.placeholder(tf.int64)
shape = tf.placeholder(tf.int64)
# Create model
def multiply(values, indices, shape, y):
    x_tensor = tf.SparseTensor(indices, values, shape)
    out_layer = tf.sparse_tensor_dense_matmul(x_tensor, y)
    return out_layer
pred = multiply(values, indices, shape, y)
# Launch the graph
with tf.Session() as sess:
    result = sess.run(pred, feed_dict={values: x_values, indices: x_indices, shape: x_shape, y: y_input})
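For completeness, one hypothetical way to fill in the X = ____ line above is to build a small random 2 x 8 CSR matrix with scipy; any scipy.sparse CSR matrix of that shape would do, and the random matrix here is just a stand-in:
import numpy as np
import scipy.sparse
# hypothetical stand-in for the elided X: a random 2 x 8 CSR matrix with ~50% nonzeros
X = scipy.sparse.random(2, 8, density=0.5, format='csr', dtype=np.float64)
print(X.toarray())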
