Can we feed a tf.Variable to a tf.placeholder in feed_dict? - python

I want to do a simple task with TensorFlow, but I am getting an error:
import tensorflow as tf
import numpy as np
import pandas as pd

fv = tf.Variable(10.0, name="first_var")
sv = tf.Variable(20.0, np.random.randn(), name="second_var")
fvp = tf.placeholder("float32", name="first_fvp", shape=[])
svp = tf.placeholder("float32", name="second_svp", shape=[])
result = tf.Variable(0.0, name="output")
result = np.multiply(fvp, svp)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
print(sess.run(result, feed_dict={fvp: fv, svp: sv}))
In this case, I am getting the error: setting an array element with a sequence.
But if I use
print(sess.run(result,feed_dict={fvp:5.0,svp:10.0}))
I get the output 50.0.

First, I still don't quite understand what your question is. It seems you've already solved that error. Please edit if possible.
About that error:
You cannot feed Tensors into feed_dict.
Read tensorflow/python/client/session.py carefully. When you feed data via feed_dict={}, acceptable feed values include Python scalars, strings, lists, numpy ndarrays, or TensorHandles. In your case, fv and sv are Tensors.
So your second call, print(sess.run(result, feed_dict={fvp: 5.0, svp: 10.0})), will work.
You can also try fv = np.array([10.0]), sv = np.array([20.0]).
Also, you don't need result = tf.Variable(0.0, name="output"); if you want to name the output, you can use result = tf.identity(np.multiply(fvp, svp), name="output").
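If you really do want the placeholder values to come from the variables, a minimal sketch (TF 1.x) is to evaluate the variables first and feed the resulting Python floats rather than the Tensor objects:
import tensorflow as tf
fv = tf.Variable(10.0, name="first_var")
sv = tf.Variable(20.0, name="second_var")
fvp = tf.placeholder("float32", name="first_fvp", shape=[])
svp = tf.placeholder("float32", name="second_svp", shape=[])
result = tf.multiply(fvp, svp, name="output")
sess = tf.Session()
sess.run(tf.global_variables_initializer())
fv_val, sv_val = sess.run([fv, sv])  # plain numpy floats, acceptable feed values
print(sess.run(result, feed_dict={fvp: fv_val, svp: sv_val}))  # 200.0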

Related

How to build TF tensor with ones in specified locations - batch compatible

I apologize for the poor question title but I'm not sure quite how to phrase it. Here's the problem I'm trying to solve: I have two NNs working off of the same input dataset in my code. One of them is a traditional network while the other is used to limit the acceptable range of the first. This works by using a tf.where() statement which works fine in most cases, such as this toy example:
pcts = [0.04, 0.06, 0.06, 0.06, 0.06, 0.06, 0.06, 0.04, 0.04, 0.04]
legal_actions = tf.where(pcts >= 0.05, tf.ones_like(pcts), tf.zeros_like(pcts))
Which gives the correct result: legal_actions = [0,1,1,1,1,1,1,0,0,0]
I can then multiply this by the output of my first network to limit its Q values to only those of the legal actions. In a case like the above this works great.
However, it is also possible that my original vector looks something like this, with low values in the middle of the high values: pcts= [0.04,0.06,0.06,0.04,0.04,0.06,0.06,0.04,0.04,0.04]
Using the same code as above my legal_actions comes out as this: legal_actions = [0,1,1,0,0,1,1,0,0,0]
Based on the code I have this is correct, however, I'd like to include any zeros in the middle as part of my legal_actions. In other words, I'd like this second example to be the same as the first. Working in basic TF this is easy to do in several different ways, such as in this reproducible example (it's also easy to do with sparse tensors):
import tensorflow as tf
pcts = tf.placeholder(tf.float32, shape=(10,))
legal_actions = tf.where(pcts >= 0.05, tf.ones_like(pcts), tf.zeros_like(pcts))
mask = tf.where(tf.greater(legal_actions, 0))
legals = tf.cast(tf.range(tf.reduce_min(mask), tf.reduce_max(mask) + 1), tf.int64)
oh = tf.one_hot(legals, 10)
oh = tf.reduce_sum(oh, 0)
with tf.Session() as sess:
    print(sess.run(oh, feed_dict={pcts: [0.04, 0.06, 0.06, 0.04, 0.04, 0.06, 0.06, 0.04, 0.04, 0.04]}))
The problem that I'm running into is when I try to apply this to my actual code which is reading in batches from a file. I can't figure out a way to fill in the "gaps" in my tensor without the range function and/or I can't figure out how to make the range function work with batches (it will only make one range at a time, not one per batch, as near as I can tell). Any suggestions on how to either make what I'm working on work or how to solve the problem a completely different way would be appreciated.
Try this code:
import tensorflow as tf

pcts = tf.random.uniform((2, 3, 4))
a = pcts >= 0.5
shape = tf.shape(pcts)[-1]
a = tf.reshape(a, (-1, shape))
a = tf.cast(a, dtype=tf.float32)

def rng(t):
    # Running maximum from the left and from the right; their elementwise
    # minimum is 1 exactly between the first and last 1 in t, filling the gaps.
    left = tf.scan(lambda acc, x: tf.maximum(acc, x), t)
    right = tf.scan(lambda acc, x: tf.maximum(acc, x), t, reverse=True)
    return tf.minimum(left, right)

a = tf.map_fn(rng, a)
a = tf.reshape(a, tf.shape(pcts))
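To sanity-check the gap-filling on the vector from the question, a short sketch (assuming TF 1.x with a Session):
import tensorflow as tf
pcts = tf.placeholder(tf.float32, shape=(10,))
legal = tf.cast(pcts >= 0.05, tf.float32)
left = tf.scan(lambda acc, x: tf.maximum(acc, x), legal)
right = tf.scan(lambda acc, x: tf.maximum(acc, x), legal, reverse=True)
filled = tf.minimum(left, right)
with tf.Session() as sess:
    print(sess.run(filled, feed_dict={pcts: [0.04, 0.06, 0.06, 0.04, 0.04, 0.06, 0.06, 0.04, 0.04, 0.04]}))
    # -> [0. 1. 1. 1. 1. 1. 1. 0. 0. 0.]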

Statsmodels: vector_ar and IRAnalysis

I'm trying to estimate the impulse response functions of a -1 standard-deviation shock to a 3-dimensional VAR using statsmodels.tsa; however, I'm currently having issues with setting the shock magnitude.
This gives me the IRFs for a 1 s.d. shock, the default:
import numpy as np
import statsmodels.tsa as sm
model = sm.vector_ar.var_model.VAR(endog = data)
fitted = model.fit()
shock= -1*fitted.sigma_u
irf = sm.vector_ar.irf.IRAnalysis(model = fitted)
The function IRAnalysis takes an argument P, an upper diagonal matrix that sets the shocks; I found this by looking at the source code. However, inputting P as shown below doesn't seem to do anything.
irf = statsmodels.tsa.vector_ar.irf.IRAnalysis(model = fitted, P = -np.linalg.cholesky(model.fitted_U))
I would really appreciate some help.
Thanks in advance.
I have had the same question and finally found something that works on my end.
Instead of using IRAnalysis explicitly, I found that transforming the VAR model into its MA representation was the best way to adjust the size of the shock.
from statsmodels.tsa.vector_ar.irf import IRAnalysis
J = fitted.ma_rep(T)     # MA representation out to horizon T
J = shock * np.array(J)  # scale each period's coefficients by the shock
This will give you the output of the irfs for T periods.
I also wanted the standard error bands on my plots, so I used the Monte Carlo error-band function as well.
G, H = fitted.irf_errband_mc(orth=False, repl=1000, steps=T, signif=0.05, seed=None, burn=100, cum=False)
Hope this helps
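To visualize one response with those bands, a hedged sketch (it assumes G and H are the lower and upper band arrays, indexed as [period, response variable, impulse variable], matching J):
import matplotlib.pyplot as plt
import numpy as np

n = min(len(np.array(J)), len(G))  # guard against off-by-one horizon mismatches
periods = np.arange(n)
plt.plot(periods, np.array(J)[:n, 0, 0], label="response of var 0 to a shock in var 0")
plt.fill_between(periods, G[:n, 0, 0], H[:n, 0, 0], alpha=0.3, label="95% error band")
plt.legend()
plt.show()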

Batch input from DataFrame in Neural Network

I am executing the following code in the final block of my regression code:
steps = 50000
with tf.Session() as sess:
    sess.run(init)
    for i in range(steps):
        sess.run(train, feed_dict={X_data: X_train, y_target: y_train})
        if i % 500 == 0:
            rand_ind = np.random.random_integers(len(X_test) + 1)
            feed = {X_data: X_test.iloc[rand_ind:rand_ind+8, :], y_target: y_test.iloc[rand_ind:rand_ind+8, :]}
            loss = tf.reduce_sum(tf.square(y_target - y_output)) / 8
            print(sess.run(loss, feed_dict=feed))
Is this a good way to generate smaller batches from a pandas DataFrame or are there better ways to do so?
I am using iloc here because before I was not able to index properly. Yet I am getting the following warning:
DeprecationWarning: This function is deprecated. Please call randint(1, 6193 + 1) instead from ipykernel import kernelapp as app
If you want to select random rows from the dataframe you could use the following code:
import numpy as np
# Note: iloc is positional, so this assumes a default RangeIndex (0..n-1).
batch = df.iloc[np.random.choice(df.index.values, sample_size)]
This selects random row indices and then uses them to build the batch; replace sample_size with the size of the batch.
If you call it multiple times, you will be drawing a random sample with replacement from your data.
If you don't want to reuse the same examples, you can use this code to sample and then drop the sampled rows so they are not used again:
import numpy as np
sample = np.random.choice(df.index.values, sample_size)  # draw row labels
batch = df.iloc[sample]          # again assumes a default RangeIndex
newdf = df.drop(sample, axis=0)  # remaining rows for later batches
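Alternatively, pandas has a built-in sampler that avoids the index bookkeeping; a short sketch using DataFrame.sample (which samples without replacement by default):
batch = df.sample(n=sample_size)  # random batch, no replacement
newdf = df.drop(batch.index)      # remaining rows, if you don't want reuse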

Tensorflow ValueError: setting an array element with a sequence with images

I've looked through many forum sites trying to find the solution, but haven't been able to.
I am trying to use Tensorflow (Python 3, Win 10 64 bit) with my own set of images. When I run it, I get a ValueError. Specifically:
Traceback (most recent call last):
  File "B:\Josh\Programming\Python\imgpredict\predict.py", line 62, in <module>
    sess.run(train_step, feed_dict={imgs:batchX, lbls: batchY})
  File "C:\Users\Josh\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 789, in run
    run_metadata_ptr)
  File "C:\Users\Josh\AppData\Local\Programs\Python\Python36\lib\site-packages\tensorflow\python\client\session.py", line 968, in _run
    np_val = np.asarray(subfeed_val, dtype=subfeed_dtype)
  File "C:\Users\Josh\AppData\Local\Programs\Python\Python36\lib\site-packages\numpy\core\numeric.py", line 531, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: setting an array element with a sequence.
My code is:
import tensorflow as tf
import numpy as np
import os
import sys
import cv2

content = []  # Where images are stored
labels_list = []

########## File opening function
with open("data/cats/files.txt") as ff:
    for line in ff:
        line = line.rstrip()
        content.append(line)
#################################

########## Labels opening function
with open("data/cats/labels.txt") as fff:
    for linee in fff:
        linee = linee.rstrip()
        labels_list.append(linee)
labels_list = np.array(labels_list)
###############################

def create_batches(batch_size):
    images1 = []
    for img1 in content:
        thedata = cv2.imread(img1)
        thedata = tf.contrib.layers.flatten(thedata)
        images1.append(thedata)
    images1 = np.asarray(images1)
    images1 = np.array(images1)
    while True:
        for i in range(0, 298, 10):
            yield(images1[i:i+batch_size], labels_list[i:i+batch_size])

imgs = tf.placeholder(dtype=tf.float32, shape=[None, 262144])
lbls = tf.placeholder(dtype=tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([262144, 10]))
b = tf.Variable(tf.zeros([10]))
y_ = tf.nn.softmax(tf.matmul(imgs, W) + b)
cross_entropy = tf.reduce_mean(-tf.reduce_sum(lbls * tf.log(y_), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()

for i in range(10000):#########################################
    for (batchX, batchY) in create_batches(10):
        for inn, imgs in enumerate(batchX):
            batchX[inn] = imgs.eval()
        sess.run(train_step, feed_dict={imgs: batchX, lbls: batchY})

correct_prediction = tf.equal(tf.argmax(y_, 1), tf.argmax(lbls, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={imgs: content, lbls: labels_list}))
I don't know if the error is from my images or my labels. I've tried lots of suggestions from other SO questions, Reddit, Google Plus, GitHub Issues, etc., but to no avail. My GitHub link for the project is https://github.com/supamonkey2000/jm-uofa and the project folder is "imgpredict".
Any help appreciated. Thanks in advance.
In this case, I think you are seeing this error because you are passing a TensorFlow object to the feed_dict when you run the training. It could be a TensorFlow object as a result of the flattening method you used:
thedata = tf.contrib.layers.flatten(thedata)
which returns a flattened tensor (more info in the docs) that, for some reason, isn't being properly evaluated.
Following this answer, to get past this issue you need to supply a numpy array to the feed dict. You could instead try:
thedata.flatten()
which will flatten the array to a vector. I tried it and it at least got rid of the error.
Beyond that, as Ofer Sadan pointed out, there are some fundamental issues with your approach. The most obvious one to me is that you are initializing your weight matrix to the image size (512 x 512 = 262144), but since you are loading 3-channel (RGB color) images, you end up with a flattened array three times that size (512 x 512 x 3 channels = 786432), so the training will fail anyway. Try converting to grayscale if color isn't important to your training data (thedata = cv2.cvtColor(thedata, cv2.COLOR_BGR2GRAY)).
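Putting both fixes together, a sketch of the batch generator with numpy-only flattening (assuming 512 x 512 source images and the content/labels_list lists from the question):
import cv2
import numpy as np

def create_batches(batch_size):
    images1 = []
    for img1 in content:
        thedata = cv2.imread(img1, cv2.IMREAD_GRAYSCALE)  # 512 x 512, one channel
        images1.append(thedata.flatten())                 # plain numpy vector, length 262144
    images1 = np.asarray(images1, dtype=np.float32)
    while True:
        for i in range(0, 298, batch_size):
            yield images1[i:i + batch_size], labels_list[i:i + batch_size]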
I apologize that this isn't a complete answer to the error, but I see many problems with your code that could generate it.
First is the create_batches function. You use a list for images1 and a tensor for thedata, you append all those tensors to the list, and then convert that list to a numpy array. That is very bad practice.
The second problem there: it is supposed to yield both images and labels, but the labels are not processed in that function at all and come from the global variable. Because of that, I see no reason to assume that they even match the images when you do this:
yield(images1[i:i+batch_size], labels_list[i:i+batch_size])
After all that, it appears that your batchX is a list of tensors, so you again transform each of them to an array (with imgs.eval()). After all that, only god knows what the actual shapes of the arrays are now, and the error itself is usually an indication that batchX is not of a proper "rectangular" shape to be converted from a list into an array (for example, if one of the elements is an array of a certain length and the others are of a different length).
My suggestion: rewrite your function, simplify it, don't use tensors in it, and don't use plain Python lists there either. It should return a simple numpy array of a shape that fits sess.run(train_step, feed_dict={imgs: batchX, lbls: batchY}).

How do I get the current value of a Variable?

Suppose we have a variable:
x = tf.Variable(...)
This variable can be updated during the training process using the assign() method.
What is the best way to get the current value of a variable?
I know we could use this:
session.run(x)
But I'm afraid this would trigger a whole chain of operations.
In Theano, you could just do
y = theano.shared(...)
y_vals = y.get_value()
I'm looking for the equivalent thing in TensorFlow.
The only way to get the value of the variable is by running it in a session. In the FAQ it is written that:
A Tensor object is a symbolic handle to the result of an operation,
but does not actually hold the values of the operation's output.
So the TF equivalent would be:
import tensorflow as tf
x = tf.Variable([1.0, 2.0])
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    v = sess.run(x)
    print(v)  # will show you your variable
The part with init = tf.global_variables_initializer() is important and must be run in order to initialize the variables.
Also, take a look at InteractiveSession if you work in IPython.
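For instance, a minimal InteractiveSession sketch (TF 1.x):
import tensorflow as tf
sess = tf.InteractiveSession()  # installs itself as the default session
x = tf.Variable([1.0, 2.0])
tf.global_variables_initializer().run()  # runs in the default session
print(x.eval())  # [1. 2.]
sess.close()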
In general, session.run(x) will evaluate only the nodes that are necessary to compute x and nothing else, so it should be relatively cheap if you want to inspect the value of the variable.
Take a look at this great answer https://stackoverflow.com/a/33610914/5543198 for more context.
tf.Print can simplify your life!
tf.Print prints the value of the tensor(s) you pass it at the point in the graph where the tf.Print node sits, each time that node is evaluated.
So for example:
import tensorflow as tf
x = tf.Variable([1.0, 2.0])
x = tf.Print(x, [x])  # prints x's value as a side effect whenever x is evaluated
x = 2 * x
sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(x)
This prints:
[1.0 2.0]
because tf.Print shows the value x had at the point where the tf.Print node was inserted into the graph. If instead you look at the value returned by the run:
v = sess.run(x)
print(v)
you will get:
[2.0 4.0]
because that is the final value of x.
In TensorFlow 2.0, eager execution is the default and sessions are gone, so
if you want to extract the value of a tensor (e.g. "net"), you can simply call its .numpy() method:
net[tf.newaxis, :, :].numpy()
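A minimal TF 2.x sketch (eager execution, no session needed):
import tensorflow as tf  # TF 2.x
x = tf.Variable([1.0, 2.0])
x.assign([3.0, 4.0])  # update the variable in place
print(x.numpy())      # [3. 4.]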
