TensorFlow neural network as an API? - python

I am in the process of writing an AI for the game 2048. At the moment, I can pull the game state from the browser and send moves to the game, but I don't know how to integrate that with TensorFlow. The nature of the project isn't conducive to training data, so I was wondering if it's possible to pass in the state of the game, have the network chuck out a move, run the move, repeat until the game is over, and then have it do the training?

This is certainly possible, and fairly simple. You'll have to set up the model you want to use; I'll assume that's already been built.
From the perspective of interacting with a TensorFlow model, you just need to marshal your data into numpy arrays and pass them in via the feed_dict argument of sess.run.
To pass an input to tensorflow and get a result you would run something like this:
result = sess.run([logits], feed_dict={x:input_data})
This would perform a forward pass producing the output of the model without making any update. Now you'll take the results and use them to take the next step in the game.
Now that you have the result of your action (e.g. labels) you can perform an update step:
sess.run([update_op], feed_dict={x:input_data, y:labels})
It's as simple as that. Notice that your model will have an optimizer defined (update_op in this example), but if you don't ask tensorflow to compute it (as in the first code sample) no updates will occur. Tensorflow is all about a dependency graph. The optimizer is dependent on the output logits, but computing logits is not dependent on the optimizer.
Presumably you'll initialize this model randomly, so the first results will be randomly generated, but each step after that will benefit from the previous updates being applied.
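To make that concrete, here is a minimal sketch of the loop, assuming the placeholders x and y and the ops logits and update_op have already been built, and that get_game_state(), send_move(), game_over() and make_labels() are hypothetical glue functions from your browser integration:
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    while not game_over():
        state = np.reshape(get_game_state(), (1, 16))     # batch of one board
        move_scores = sess.run(logits, feed_dict={x: state})
        move = int(np.argmax(move_scores))                # pick the highest-scoring move
        send_move(move)
        labels = make_labels(state, move)                 # however you derive a training target
        sess.run(update_op, feed_dict={x: state, y: labels})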
If you're using a reinforcement learning model, then you would only produce a reward at some indeterminate time in the future, and when and how you run the update would differ a little from this example, but the general nature of the problem remains the same.

Related

Neural Network Regression - Considering a dynamic state

I am using Tensorflow to solve a regression problem with known dynamic components, that is, we are aware that the (singular) label at time t depends on some dynamic state of the environment, but this feature is unknown. The initial attempt to solve the problem via simple regression has, understandably, failed, confirming our assumption that there is some kind of dynamic influence by a feature we have no access to.
However, the state of the environment at time t should be reflected somewhere in the features and labels (and in particular their interplay) known at times t0-n, where n > 0. Unfortunately, because of the nature of the problem, the output at time t heavily depends on the input at time t, about as much as it depends on the dynamic state of the environment. I am worried that this renders the approach I wanted to try ineffective - time series forecasting, in my understanding, would consider features from previous timesteps, but no inputs on the current timestep. Additionally, I know labels from previous timesteps, but not at the time at which I want to make my prediction.
Here is a table to illustrate the problem:
| t    | input     | output    |
|------|-----------|-----------|
| 0    | x(t=0)    | y(t=0)    |
| ...  | ...       | ...       |
| t0-1 | x(t=t0-1) | y(t=t0-1) |
| t0   | x(t=t0)   | y(t=t0)=? |
How can I use all the information at my disposal to predict the value of y(t=t0), using x(t=t0) (where x is the array of input features) and a defined window of features and labels at previous timesteps?
Is there an established method for solving a problem like this, either using a neural net or perhaps even a different model?
Does this problem require a combination of methods, and if so, which ones might be suitable for tackling it?
The final model is meant to be deployed and continue working for future time windows as well. We know the size of the relevant time window to be roughly 100 time steps into the past.
The kind of problem I have described is, as I have since learned, linked to so-called exogenous variables. In my case, I require something called NNARX, which is similar to the ARMAX model at its core, but (as a neural net) can take non-linearity into account.
The general idea is to introduce an LSTM layer that acts as an encoder for the historical inputs, and to combine its output with another input layer carrying the exogenous variables. Both meet at the so-called decoder - the rest of the NN architecture.
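For illustration, a minimal Keras sketch of that encoder/decoder idea, assuming a window of 100 past timesteps; the feature counts and layer sizes are purely illustrative:
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 100   # past timesteps to look back over
N_HIST = 8     # features per past timestep (past inputs + past labels), illustrative
N_EXO = 7      # features in the current input x(t=t0), illustrative

# Encoder: summarize the history into a fixed-size state vector.
history_in = keras.Input(shape=(WINDOW, N_HIST), name='history')
encoded = layers.LSTM(64)(history_in)

# Exogenous branch: the current-timestep input x(t=t0).
exo_in = keras.Input(shape=(N_EXO,), name='current_input')

# "Decoder": combine both and regress the current label y(t=t0).
combined = layers.concatenate([encoded, exo_in])
hidden = layers.Dense(64, activation='relu')(combined)
output = layers.Dense(1, name='y_t0')(hidden)

model = keras.Model(inputs=[history_in, exo_in], outputs=output)
model.compile(optimizer='adam', loss='mse')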

How does PyTorch's loss.backward() work when "retain_graph=True" is specified?

I'm a newbie with PyTorch and adversarial networks. I've tried to look for an answer on the PyTorch documentation and from previous discussions both in the PyTorch and StackOverflow forums, but I couldn't find anything useful.
I'm trying to train a GAN with a Generator and a Discriminator, but I cannot understand if the whole process is working or not. As far as I understand, I should train the Generator first and then update the Discriminator's weights (similar to this). My code for updating the weights of both models is:
# computing loss_g and loss_d...
optim_g.zero_grad()
loss_g.backward()
optim_g.step()
optim_d.zero_grad()
loss_d.backward()
optim_d.step()
where loss_g is the generator loss, loss_d is the discriminator loss, optim_g is the optimizer referring to the generator's parameters and optim_d is the discriminator optimizer.
If I run the code like this, I get an error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
So I specify loss_g.backward(retain_graph=True), and here comes my doubt: why should I specify retain_graph=True if there are two networks with two different graphs? Am I getting something wrong?
Having two different networks doesn't necessarily mean that the computational graph is different. The computational graph only tracks the operations that were performed from the input to the output and it doesn't matter where the operation takes place. In other words, if you use the output of the first model in the second model (e.g. model2(model1(input))), you have the same sequential operations as if they were part of the same model. In fact, that is no different from having different parts of the model, such as multiple convolutions, that you apply one after the other.
The error you get indicates that you are trying to backpropagate from the discriminator through the generator, which would mean that the discriminator's output directly adapts the generator's parameters so that the discriminator is successful. In an adversarial setting that is precisely what you want to avoid; they should be independent of each other. By setting retain_graph=True you incorrectly hide this bug. In nearly all cases retain_graph=True is not the solution and should be avoided.
To resolve the issue, the two models need to be made independent of each other. The crossover between the two models happens when you use the generator's output as input to the discriminator, since the discriminator should decide whether that input is real or fake. Something along these lines:
fake = generator(noise)
real_prediction = discriminator(real)
# Using the output of the generator, continues the graph.
fake_prediction = discriminator(fake)
Even though fake comes from the generator, as far as the discriminator is concerned it's merely another input, just like real. Therefore fake should be treated the same as real, i.e. not attached to any computational graph. That can easily be done with torch.Tensor.detach, which decouples the tensor from the graph.
fake = generator(noise)
real_prediction = discriminator(real)
# Detach to make it independent of the generator
fake_prediction = discriminator(fake.detach())
That is also done in the code you referenced, from erikqu/EnhanceNet-PyTorch - train.py:
hr_imgs = torch.cat([discriminator(hr), discriminator(generated_hr.detach())], dim=0)
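Putting it together, a minimal sketch of how the two update steps could look once fake is detached for the discriminator (this assumes generator, discriminator, optim_g, optim_d, a criterion such as BCE, and the real/fake label tensors are already defined):
# Generator update: gradients flow through the discriminator into the generator,
# but only the generator's parameters are stepped.
optim_g.zero_grad()
fake = generator(noise)
loss_g = criterion(discriminator(fake), real_labels)   # generator wants a "real" verdict
loss_g.backward()
optim_g.step()

# Discriminator update: detach() cuts the graph at the generator's output,
# so no second backward pass through the generator is attempted.
optim_d.zero_grad()
loss_real = criterion(discriminator(real), real_labels)
loss_fake = criterion(discriminator(fake.detach()), fake_labels)
loss_d = loss_real + loss_fake
loss_d.backward()
optim_d.step()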

Tensorflow v1.10+ why is an input serving receiver function needed when checkpoints are made without it?

I'm in the process of adapting my model to TensorFlow's estimator API.
I recently asked a question regarding early stopping based on validation data where in addition to early stopping, the best model at this point should be exported.
It seems that my understanding of what a model export is and what a checkpoint is is not complete.
Checkpoints are made automatically. From my understanding, the checkpoints are sufficient for the estimator to start "warm" - either using pre-trained weights or weights saved prior to an error (e.g. if you experienced a power outage).
What is nice about checkpoints is that I do not have to write any code besides what is necessary for a custom estimator (namely, input_fn and model_fn).
While, given an initialized estimator, one can just call its train method to train the model, in practice this method is rather lackluster. Often one would like to do several things:
1. compare the network periodically to a validation dataset to ensure you are not over-fitting
2. stop the training early if over-fitting occurs
3. save the best model when the network finishes training (either by hitting the specified number of training steps or by the early stopping criteria).
To someone new to the "high level" estimator API, a lot of low-level expertise seems to be required (e.g. for the input_fn), as how one could get the estimator to do this is not straightforward.
By some light code reworking #1 can be achieved by using tf.estimator.TrainSpec and tf.estimator.EvalSpec with tf.estimator.train_and_evaluate.
In the previous question user #GPhilo clarifies how #2 can be achieved by using a semi-unintuitive function from the tf.contrib:
tf.contrib.estimator.stop_if_no_decrease_hook(my_estimator,'my_metric_to_monitor', 10000)
(unintuitive as "the early stopping is not triggered according to the number of non-improving evaluations, but to the number of non-improving evals in a certain step range").
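For reference, a minimal sketch of how I understand these pieces fit together (my_estimator, the input functions and the step counts are placeholders):
early_stop = tf.contrib.estimator.stop_if_no_decrease_hook(
    my_estimator, 'my_metric_to_monitor', max_steps_without_decrease=10000)

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn,
                                    max_steps=100000,
                                    hooks=[early_stop])
eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn,
                                  steps=None,        # evaluate on the full eval set
                                  throttle_secs=60)  # minimum time between evaluations

tf.estimator.train_and_evaluate(my_estimator, train_spec, eval_spec)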
#GPhilo - noting that it is unrelated to #2 - also answered how to do #3 (as requested in the original post). Yet, I do not understand what an input_serving_fn is, why it is needed, or how to make it.
This is further confusing to me as no such function is needed to make checkpoints, or for the estimator to start "warm" from the checkpoint.
So my questions are:
what is the difference between a checkpoint and an exported best model?
what exactly is a serving input receiver function and how to write one? (I have spent a bit of time reading over the tensorflow docs and do not find it sufficient to understand how I should write one, and why I even have to).
how can I train my estimator, save the best model, and then later load it.
To aid in answering my question I am providing this Colab document.
This self-contained notebook produces some dummy data, saves it in TF Records, has a very simple custom estimator via model_fn, and trains this model with an input_fn that uses the TF Record files. Thus it should be sufficient for someone to explain to me what placeholders I need to make for the input serving receiver function and how I can accomplish #3.
Update
#GPhilo, foremost, I cannot overstate my appreciation for your thoughtful consideration and care in helping me (and hopefully others) understand this matter.
My “goal” (motivating me to ask this question) is to try and build a reusable framework for training networks so I can just pass a different build_fn and go (plus have the quality of life features of exported model, early stopping, etc).
An updated (based off your answers) Colab can be found here.
After several readings of your answer, I have found now some more confusion:
1.
the way you provide input to the inference model is different than the one you use for the training
Why? To my understanding the data input pipeline is not:
load raw —> process —> feed to model
But rather:
Load raw —> preprocess —> store (perhaps as TF Records)
# the data processing has nothing to do with feeding data to the model?
Load processed —> feed to model
In other words, it is my understanding (perhaps wrongly) that the point of a tf Example / SequenceExample is to store a complete singular datum entity ready to go - no other processing needed other than reading from the TFRecord file.

Thus there can be a difference between the training / evaluation input_fn and the inference one (e.g. reading from a file vs. eager/interactive evaluation of in-memory data), but the data format is the same (except that for inference you might want to feed only one example rather than a batch…)
I agree that the “input pipeline is not part of the model itself”. However, in my mind, and I am apparently wrong in thinking so, with the estimator I should be able to feed it a batch for training and a single example (or batch) for inference.
An aside: regarding "When evaluating, you don't need the gradients and you need a different input function.", is the only difference (at least in my case) the files from which you are reading?
I am familiar with that TF Guide, but I have not found it useful, because it is unclear to me what placeholders I need to add and what additional ops need to be added to convert the data.
What if I train my model with records and want to inference with just the dense tensors?
Tangentially, I find the example in the linked guide subpar, given the tf record interface requires the user to define multiple times how to write to / extract features from a tf record file in different contexts. Further, given that the TF team has explicitly stated they have little interest in documenting tf records, any documentation built on top of it, to me, is therefore equally unenlightening.
Regarding tf.estimator.export.build_raw_serving_input_receiver_fn.
What is the placeholder called? Input? Could you perhaps show the analog of tf.estimator.export.build_raw_serving_input_receiver_fn by writing the equivalent serving_input_receiver_fn?
Regarding your example serving_input_receiver_fn with the input images: how do you know to call the features 'images' and the receiver tensor 'input_data'? Is that (the latter) standard?
How do I name an export with signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY?
What is the difference between a checkpoint and an exported best model?
A checkpoint is, at its minimum, a file containing the values of all the variables of a specific graph taken at a specific time point.
By specific graph I mean that when loading back your checkpoint, what TensorFlow does is loop through all the variables defined in your graph (the one in the session you're running) and search for a variable in the checkpoint file that has the same name as the one in the graph. For resuming training, this is ideal because your graph will always look the same between restarts.
An exported model serves a different purpose. The idea of an exported model is that, once you're done training, you want to get something you can use for inference that doesn't contain all the (heavy) parts that are specific to training (some examples: gradient computation, global step variable, input pipeline, ...).
Moreover, and this is the key point, the way you typically provide input to the inference model is different from the one you use for training. For training, you have an input pipeline that loads, preprocesses and feeds data to your network. This input pipeline is not part of the model itself and may have to be altered for inference. This is a key point when operating with Estimators.
Why do I need a serving input receiver function?
To answer this, I'll first take a step back. Why do we need input functions at all, and what are they? TF's Estimators, while perhaps not as intuitive as other ways to model networks, have a great advantage: they clearly separate model logic and input processing logic by means of input functions and model functions.
A model lives in 3 different phases: Training, Evaluation and Inference. For the most common use-cases (or at least, all I can think of at the moment), the graph running in TF will be different in all these phases. The graph is the combination of input preprocessing, model and all the machinery necessary to run the model in the current phase.
A few examples to hopefully clarify further: When training, you need gradients to update the weights, an optimizer that runs the training step, metrics of all kinds to monitor how things are going, an input pipeline that grabs data from the training set, etc. When evaluating, you don't need the gradients and you need a different input function. When you are inferencing, all you need is the forward part of the model and again the input function will be different (no tf.data.* stuff but typically just a placeholder).
Each of these phases in Estimators has its own input function. You're familiar with the training and evaluation ones, the inference one is simply your serving input receiver function. In TF lingo, "serving" is the process of packing a trained model and using it for inference (there's a whole TensorFlow serving system for large-scale operation but that's beyond this question and you most likely won't need it anyhow).
Time to quote a TF guide on the topic:
During training, an input_fn() ingests data and prepares it for use by the model. At serving time, similarly, a serving_input_receiver_fn() accepts inference requests and prepares them for the model. This function has the following purposes:
To add placeholders to the graph that the serving system will feed with inference requests.
To add any additional ops needed to convert data from the input format into the feature Tensors expected by the model.
Now, the serving input function specification depends on how you plan on sending input to your graph.
If you're going to pack the data in a (serialized) tf.Example (which is similar to one of the records in your TFRecord files), your serving input function will have a string placeholder (that's for the serialized bytes of the example) and will need a specification of how to interpret the example in order to extract its data. If this is the way you want to go, I invite you to have a look at the example in the linked guide above; it essentially shows how you set up the specification of how to interpret the example and parse it to obtain the input data.
If, instead, you're planning on feeding input directly to the first layer of your network, you still need to define a serving input function, but this time it will only contain a placeholder that will be plugged directly into the network. TF offers a function that does just that: tf.estimator.export.build_raw_serving_input_receiver_fn.
So, do you actually need to write your own input function? If all you need is a placeholder, no: just use build_raw_serving_input_receiver_fn with the appropriate parameters. If you need fancier preprocessing, then yes, you might need to write your own. In that case, it would look something like this:
def serving_input_receiver_fn():
    """For the sake of the example, let's assume your input to the network will be a 28x28 grayscale image that you'll then preprocess as needed."""
    input_images = tf.placeholder(dtype=tf.uint8,
                                  shape=[None, 28, 28, 1],
                                  name='input_images')
    # here you do all the operations you need on the images before they can be fed
    # to the net (e.g., normalizing, reshaping, etc). Let's assume "images" is the resulting tensor.
    features = {'input_data': images}                 # this is the dict that is then passed as the "features" parameter to your model_fn
    receiver_tensors = {'input_data': input_images}   # as far as I understand, this is needed to map the input to a name you can retrieve later
    return tf.estimator.export.ServingInputReceiver(features, receiver_tensors)
How can I train my estimator, save the best model, and then later load it?
Your model_fn takes the mode parameter so that you can build the model conditionally. In your colab, you always have an optimizer, for example. This is wrong, as it should only be there for mode == tf.estimator.ModeKeys.TRAIN.
Secondly, your build_fn has an "outputs" parameter that is meaningless. This function should represent your inference graph: take as input only the tensors you'll feed to it at inference time and return the logits/predictions.
I'll thus assume the outputs parameter is not there and that the build_fn signature is def build_fn(inputs, params).
Moreover, you define your model_fn to take features as a tensor. While this can be done, it both limits you to having exactly one input and complicates things for the serving_fn (you can't use the canned build_raw_... but need to write your own and return a TensorServingInputReceiver instead). I'll choose the more generic solution and assume your model_fn is as follows (I omit the variable scope for brevity; add it as necessary):
def model_fn(features, labels, mode, params):
    my_input = features["input_data"]
    my_input.set_shape(I_SHAPE(params['batch_size']))

    # output of the network
    onet = build_fn(my_input, params)
    predicted_labels = tf.nn.sigmoid(onet)
    predictions = {'labels': predicted_labels, 'logits': onet}
    export_outputs = {  # see EstimatorSpec's docs to understand what this is and why it's necessary.
        'labels': tf.estimator.export.PredictOutput(predicted_labels),
        'logits': tf.estimator.export.PredictOutput(onet)
    }
    # NOTE: export_outputs can also be used to save models as "SavedModel"s during evaluation.

    # HERE is where the common part of the graph between training, inference and evaluation stops.
    if mode == tf.estimator.ModeKeys.PREDICT:
        # return early and avoid adding the rest of the graph that has nothing to do with inference.
        return tf.estimator.EstimatorSpec(mode=mode,
                                          predictions=predictions,
                                          export_outputs=export_outputs)

    labels.set_shape(O_SHAPE(params['batch_size']))

    # calculate loss
    loss = loss_fn(onet, labels)

    # add optimizer only if we're training
    if mode == tf.estimator.ModeKeys.TRAIN:
        optimizer = tf.train.AdagradOptimizer(learning_rate=params['learning_rate'])

    # some metrics used both in training and eval
    mae = tf.metrics.mean_absolute_error(labels=labels, predictions=predicted_labels, name='mae_op')
    mse = tf.metrics.mean_squared_error(labels=labels, predictions=predicted_labels, name='mse_op')
    metrics = {'mae': mae, 'mse': mse}
    tf.summary.scalar('mae', mae[1])
    tf.summary.scalar('mse', mse[1])

    if mode == tf.estimator.ModeKeys.EVAL:
        return tf.estimator.EstimatorSpec(mode, loss=loss, eval_metric_ops=metrics, predictions=predictions, export_outputs=export_outputs)

    if mode == tf.estimator.ModeKeys.TRAIN:
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op, eval_metric_ops=metrics, predictions=predictions, export_outputs=export_outputs)
Now, to set up the exporting part, after your call to train_and_evaluate finished:
1) Define your serving input function:
serving_fn = tf.estimator.export.build_raw_serving_input_receiver_fn(
    {'input_data': tf.placeholder(tf.float32, [None, #YOUR_INPUT_SHAPE_HERE (without batch size)#])})
2) Export the model to some folder
est.export_savedmodel('my_directory_for_saved_models', serving_fn)
This will save the current state of the estimator to wherever you specified. If you want a specific checkpoint, load it before calling export_savedmodel.
This will save in "my_directory_for_saved_models" a prediction graph with the trained parameters that the estimator had when you called the export function.
Finally, you might want to freeze the graph (look up freeze_graph.py) and optimize it for inference (look up optimize_for_inference.py and/or transform_graph), obtaining a frozen *.pb file you can then load and use for inference as you wish.
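As for loading the exported model later, one option is tf.contrib.predictor (a sketch, assuming TF 1.x with tf.contrib available; the timestamp directory and my_numpy_batch are placeholders):
from tensorflow.contrib import predictor

# export_savedmodel creates a timestamped subdirectory; point at that one.
predict_fn = predictor.from_saved_model('my_directory_for_saved_models/<timestamp>',
                                        signature_def_key='labels')

# The feed key matches the one given to build_raw_serving_input_receiver_fn.
result = predict_fn({'input_data': my_numpy_batch})
print(result)  # dict of numpy arrays; the keys come from the chosen signature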
Edit: Adding answers to the new questions in the update
Sidenote:
My “goal” (motivating me to ask this question) is to try and build a
reusable framework for training networks so I can just pass a
different build_fn and go (plus have the quality of life features of
exported model, early stopping, etc).
By all means, if you manage, please post it on GitHub somewhere and link it to me. I've been trying to get just the same thing up and running for a while now and the results are not quite as good as I'd like them to be.
Question 1:
In other words, it is my understanding (perhaps wrongly) that the
point of a tf Example / SequenceExample is to store a complete
singular datum entity ready to go - no other processing needed other
than reading from the TFRecord file.
Actually, this is typically not the case (although your way is, in theory, perfectly fine too).
You can see TFRecords as an (awfully documented) way to store a dataset in a compact way. For image datasets, for example, a record typically contains the compressed image data (as in, the bytes composing a jpeg/png file), its label and some meta information. The input pipeline then reads a record, decodes it, preprocesses it as needed and feeds it to the network. Of course, you can move the decoding and preprocessing before the generation of the TFRecord dataset and store ready-to-feed data in the examples, but the size blow-up of your dataset will be huge.
The specific preprocessing pipeline is one example of what changes between phases (for example, you might have data augmentation in the training pipeline, but not in the others). Of course, there are cases in which these pipelines are the same, but in general this is not true.
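As an illustration, a minimal sketch of such a training pipeline, assuming records that hold a jpeg-encoded 'image' and an int64 'label' (the file name and the augmentation step are placeholders):
def train_input_fn():
    def parse(serialized):
        feats = tf.parse_single_example(serialized, {
            'image': tf.FixedLenFeature([], tf.string),
            'label': tf.FixedLenFeature([], tf.int64),
        })
        image = tf.image.decode_jpeg(feats['image'], channels=1)
        image = tf.image.convert_image_dtype(image, tf.float32)
        image = tf.image.random_flip_left_right(image)   # augmentation: training pipeline only
        return {'input_data': image}, feats['label']

    dataset = tf.data.TFRecordDataset('train.tfrecords')
    return dataset.map(parse).shuffle(1000).batch(32).repeat()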
About the aside:
“When evaluating, you don't need the gradients and you need a
different input function. “, the only difference (at least in my case)
is the files from which you reading?
In your case that may be. But again, assume you're using data augmentation: You need to disable it (or, better, don't have it at all) during eval and this alters your pipeline.
Question 2: What if I train my model with records and want to inference with just the dense tensors?
This is precisely why you separate the pipeline from the model.
The model takes as input a tensor and operates on it. Whether that tensor is a placeholder or is the output of a subgraph that converts it from an Example to a tensor, that's a detail that belongs to the framework, not to the model itself.
The splitting point is the model input. The model expects a tensor (or, in the more generic case, a dict of name:tensor items) as input and uses that to build its computation graph. Where that input comes from is decided by the input functions, but as long as the output of all input functions has the same interface, one can swap inputs as needed and the model will simply take whatever it gets and use it.
So, to recap, assuming you train/eval with Examples and predict with dense tensors, your train and eval input functions will set up a pipeline that reads examples from somewhere, decodes them into tensors and returns those to the model to use as inputs. Your predict input function, on the other hand, just sets up one placeholder per input of your model and returns them to the model, because it assumes you'll put in the placeholders the data ready to be fed to the network.
Question 3:
You pass the placeholder as a parameter of build_raw_serving_input_receiver_fn, so you choose its name:
tf.estimator.export.build_raw_serving_input_receiver_fn(
    {'images': tf.placeholder(tf.float32, [None, 28, 28, 1], name='input_images')})
Question 4:
There was a mistake in the code (I had mixed up two lines), the dict's key should have been input_data (I amended the code above).
The key in the dict has to be the key you use to retrieve the tensor from features in your model_fn. In model_fn the first line is:
my_input = features["input_data"]
hence the key is 'input_data'.
As for the key in receiver_tensors, I'm still not quite sure what role it has, so my suggestion is to try setting a different name than the key in features and check where the name shows up.
Question 5:
I'm not sure I understand, I'll edit this after some clarification

Training in Keras with external evaluation function

Let me first describe the setup:
We have an autonomous agent in Unity whose decisions are based on the perceived environment (level) and some pre-defined parameters for value mapping. Our aim is to pre-train the agent's parameters in a DNN. The idea is basically to define an error metric to evaluate the performance of the agent in a Unity simulation (run the level and, e.g., measure the deviation from the optimal trajectory = ground truth in Unity). So, based on the input level, the network should train to output the parameters, the simulation is performed, and the error is passed back to the network as the error value (like accuracy), so the network can train based on that error/performance.
Is there any way to perform the evaluation (comparison to the ground truth) during training outside of Keras? Usually one passes X data to the network, trains, and compares the result to the ground truth Y. This works fine for predictions, but I don't want to predict something; what I want is to measure the deviation from the ground truth inside the simulation.
I know there is Unity ML-Agents, but as far as I could read, the 'brain' controls the agent at runtime, i.e. it is updated on every frame and controls the movement. What I want is to perform the whole simulation and then update the params/weights of the network.
Best wishes.
After some talks at my university:
The setup won't work this way, since I need to split the process.
I need the parameters of working agents to train the network based only on the level description (e.g. a matrix, like a video game description language). To obtain the parametrized agents based on the actual level and the ground truth data (e.g. deviation from the trajectory), one needs to use deep reinforcement learning with a score function; Unity ML-Agents might be useful for that. Afterwards, I can use the resulting parameter settings and the corresponding level data to train a network that yields the desired parameters based only on the level description.
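A minimal sketch of that second, supervised stage, assuming the reinforcement learning stage has already produced matching arrays of level matrices and agent parameters (the file names and shapes are illustrative):
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

levels = np.load('levels.npy')   # e.g. (num_levels, 20, 30) level description matrices
params = np.load('params.npy')   # e.g. (num_levels, 8) tuned agent parameters per level

model = keras.Sequential([
    layers.Flatten(input_shape=levels.shape[1:]),
    layers.Dense(128, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(params.shape[1])          # regress the agent parameters
])
model.compile(optimizer='adam', loss='mse')
model.fit(levels, params, epochs=50, validation_split=0.2)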

Tensorflow: how do I extract/export variable values at every iteration of training?

I have been playing around with some neural networks on Tensorflow and I wanted to make a visualization of the neural network's learning process.
To do so, I intend to extract the following variables into text/JSON/csv: pre-activation result before 1st layer, activation, bias and weight values for testing and training, each layer and for all time steps. I am looking for a generalizable solution so that I don't have to modify my source code (or at least not more than one or two lines) when applying visualization to future networks. Ideally I could run some function from another python program to read any python/TF code and extract the variables described above. So far I have considered the following solutions:
1) Use tf.summary and the FileWriter to save the values as serialized protocol buffers, then find a way to convert from protocol buffer to JSON format. Unfortunately this would not fit the bill, as it requires me to modify too much inner code.
2) Perhaps using https://www.tensorflow.org/api_docs/python/tf/train/export_meta_graph
Although I am not sure how to implement it, given that my TF foundations are not quite there yet.
3) I have also found this solution:
W_val, b_val= sess.run([W, b])
np.savetxt("W1.csv", W_val, delimiter=",")
np.savetxt("b1.csv", b_val, delimiter=",")
But the problem is that it only saves the final values of the weights and biases, whereas I am looking to save their values at all timesteps of training.
If anyone has any suggestions on how to tackle this problem or any guidance I would appreciate it.
Many thanks
for step in range(num_train_steps):
    _, weight_values, bias_values = sess.run([your_train_op, weight, bias])
    # save weight_values and bias_values
Doing it with tf.Summaries is probably a good idea. You could then visualize it all in TensorBoard, much like with some of the tutorials and the Inception retraining code.
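A minimal sketch of the summary route, assuming W, b, train_op and feeds already exist; the histograms end up in TensorBoard rather than in a text file:
tf.summary.histogram('weights', W)
tf.summary.histogram('biases', b)
merged = tf.summary.merge_all()

writer = tf.summary.FileWriter('./logs', sess.graph)
for step in range(num_train_steps):
    _, summary = sess.run([train_op, merged], feed_dict=feeds)
    writer.add_summary(summary, step)
writer.close()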
Alternatively you could perform fetches within your sess.run() call to grab whatever tensors you like at every step (i.e. every run call).
I have pasted below a response to a similar question regarding extracting the cross entropy:
When you do your session run call (e.g. res = sess.run(...)), you can put in a fetch for your cross entropy variable.
For example, let's say you have a complicated sess.run() call that gets some predictions, but you also want your cross entropy; then you may have code that looks like this:
feeds = {x_data: x, y_data: y}
fetches = [y_result, cross_entropy]
res = sess.run(fetches=fetches, feed_dict=feeds)
predictions = res[0]  # your first fetch parameter
xent = res[1]         # your second fetch parameter
Fetches within the run call allow you to "fetch" tensors from your graph.
You should be able to do the above but instead of cross entropy, just a list of whatever you want. I use it to fetch both my summaries and also intermediate accuracy values.
