tensorflow evaluate with confusion matrix - python

In the TensorFlow CNN tutorial, it computes the accuracy, but I want to extend that to compute a confusion matrix.
Three different approaches immediately came to mind:
I tried to directly compute the prediction result instead of top_k_op in TensorFlow, so that I could then use sklearn, but I failed because it uses multiple threads to compute (line 88);
I tried to load the trained Variables and feed a new placeholder to cifar10.inference, but failed again, because it defines batch_image as the input (line 225);
The last approach is to define a new operation to replace line 128
top_k_op = tf.nn.in_top_k(logits, labels, 1)
but I could not find a proper operation that could do that.
This has afflicted me for several days. Please help. Thank you in advance.

You can use sklearn's confusion_matrix only after running inference on the entire dataset.
Meaning, if you are modifying the eval_only function, you should just accumulate all the scores into some thread-safe container (a list), and then, after all threads are stopped (line 113), run a single confusion-matrix computation.
Additionally, if you want to do it in the graph, TensorFlow recently gained a confusion_matrix op you can try using. That said, it only works on a single batch, so you will need to increase your batch size to get any kind of resolution, or write a custom aggregator.
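For instance, here is a minimal sketch (TF 1.x session style) of that accumulate-then-compute approach; it assumes logits and labels are the tensors from the tutorial's graph, and that sess, coord and num_iter come from the existing eval loop:
import numpy as np
import tensorflow as tf
from sklearn.metrics import confusion_matrix

predicted_op = tf.argmax(logits, 1)  # predicted class index per example

all_preds, all_labels = [], []
step = 0
while step < num_iter and not coord.should_stop():
    preds, lbls = sess.run([predicted_op, labels])  # one batch per run call
    all_preds.append(preds)
    all_labels.append(lbls)
    step += 1

cm = confusion_matrix(np.concatenate(all_labels), np.concatenate(all_preds))
print(cm)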

Related

Keras Tensorflow Aggregate Metrics Using Addition Instead of Mean

I know that TensorFlow/Keras provides stateful metrics that can be updated using metric.update_state(). I understand this stateful updating is performed by taking the average/mean via the MeanMetricWrapper class.
What should I do in case I would like to update the metrics using another operation, for example addition (say I would like to accumulate the loss across all batches instead of taking the average, so I can print the loss across the entire epoch instead of the per-batch average)?
I am more interested in solutions that can work seamlessly with model.train_on_batch(). Thank you.
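Not an authoritative answer, but one possible sketch is to subclass tf.keras.metrics.Metric directly so that update_state() adds each batch's value to a running total instead of averaging it; the class name, the summed-absolute-error placeholder and the variable names below are all illustrative:
import tensorflow as tf

class SumLoss(tf.keras.metrics.Metric):
    def __init__(self, name='sum_loss', **kwargs):
        super().__init__(name=name, **kwargs)
        self.total = self.add_weight(name='total', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        # accumulate by addition instead of averaging
        batch_value = tf.reduce_sum(tf.abs(y_true - y_pred))
        self.total.assign_add(batch_value)

    def result(self):
        return self.total

    def reset_state(self):  # named reset_states() on older TF versions
        self.total.assign(0.0)

# model.compile(..., metrics=[SumLoss()]); train_on_batch() then reports the running sum.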

Why is my model giving different result each time I train it?

My question is: why, when I train the same algorithm twice, does it give different results each time?
Is this normal, or might there be a problem in the data or the code?
The algorithm is deep deterministic policy gradient.
It's absolutely normal. There is no problem with either the data or the code.
The algorithm may be initialized to a random state, such as the initial weights in an artificial neural network.
Try setting the NumPy seed for reproducible results, as below:
import numpy as np
np.random.seed(42)
Learn more about this from here.
When you initialize the weights of your model, they are typically initialized randomly by whatever framework you use, most likely via np.random.rand(), and therefore yield different results every time.
If you do not want randomized weights, use np.random.seed(10) to always get the same results. If you are using any other library, I'm sure there are equivalent commands.
Edit: I saw you used TensorFlow; in that case:
tf.random.set_random_seed(10)
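Putting both answers together, a minimal sketch that seeds the usual sources of randomness (the exact TensorFlow call depends on your version) would be:
import random
import numpy as np
import tensorflow as tf

random.seed(42)         # Python's built-in RNG
np.random.seed(42)      # NumPy (weight init, shuffling done in NumPy)
tf.random.set_seed(42)  # TF 2.x; on TF 1.x use tf.random.set_random_seed(42)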

Predict() on Keras always gives different results even if the NN and the dataset are the same

I have my model and a fixed dataset on which I do the train_test_split twice: once to get train and test sets, and a second time to also get a validation set.
I have to reuse the same network, on the same data, twice in two different modules, but every time I do that I get different results.
Is there a way to fix it?
I have the weights fixed and random_state = 42 to eliminate every form of randomness, but it still does not seem to be enough.
The optimizer I used is Adam and the loss function is the mean absolute error.
Do you train and evaluate (predict) the model in the same script and process?
Please check the official guide on how to obtain reproducible results using Keras during development.
In addition, you can try saving and loading your model (in another file) to check the predictions.
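As a rough sketch of that save/load check, assuming a compiled Keras model named model and some held-out inputs X_val (the file name is just an example):
from tensorflow import keras

model.save('model_check.h5')                           # in the training script/module
restored = keras.models.load_model('model_check.h5')  # in the other module
preds_a = model.predict(X_val)
preds_b = restored.predict(X_val)
# With fixed weights and identical inputs, these should match (up to float noise).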

TensorFlow Custom Estimator predict throwing value error

Note: this question has an accompanying, documented Colab notebook.
TensorFlow's documentation can, at times, leave a lot to be desired. Some of the older docs for the lower-level APIs seem to have been expunged, and most newer documents point towards using higher-level APIs such as TensorFlow's subset of Keras or Estimators. This would not be so problematic if the higher-level APIs did not so often rely closely on their lower levels. Case in point: Estimators (especially the input_fn when using TensorFlow Records).
Over the following Stack Overflow posts:
Tensorflow v1.10: store images as byte strings or per channel?
Tensorflow 1.10 TFRecordDataset - recovering TFRecords
Tensorflow v1.10+ why is an input serving receiver function needed when checkpoints are made without it?
TensorFlow 1.10+ custom estimator early stopping with train_and_evaluate
TensorFlow custom estimator stuck when calling evaluate after training
and with the gracious assistance of the TensorFlow / StackOverflow community, we have moved closer to doing what the TensorFlow "Creating Custom Estimators" guide has not: demonstrating how to make an estimator one might actually use in practice (rather than a toy example), e.g. one which:
has a validation set for early stopping if performance worsens,
reads from TF Records, because many datasets are larger than the 1 GB that TensorFlow recommends keeping in memory, and
saves its best version whilst training.
While I still have many questions regarding this (from the best way to encode data into a TF Record, to what exactly the serving_input_fn expects), there is one question that stands out more prominently than the rest:
How to predict with the custom estimator we just made?
Under the documentation for predict, it states:
input_fn: A function that constructs the features. Prediction continues until input_fn raises an end-of-input exception (tf.errors.OutOfRangeError or StopIteration). See Premade Estimators for more information. The function should construct and return one of the following:
A tf.data.Dataset object: Outputs of Dataset object must have same constraints as below.
features: A tf.Tensor or a dictionary of string feature name to Tensor. features are consumed by model_fn. They should satisfy the expectation of model_fn from inputs.
A tuple, in which case the first item is extracted as features.
(perhaps) Most likely, if one is using estimator.predict, they are using data in memory such as a dense tensor (because a held out test set would likely go through evaluate).
So I, in the accompanying Colab, create a single dense example, wrap it up in a tf.data.Dataset, and call predict to get a ValueError.
I would greatly appreciate it if someone could explain to me how I can:
load my saved estimator
given a dense, in memory example, predict the output with the estimator
to_predict = random_onehot((1, SEQUENCE_LENGTH, SEQUENCE_CHANNELS))\
    .astype(tf_type_string(I_DTYPE))
pred_features = {'input_tensors': to_predict}
pred_ds = tf.data.Dataset.from_tensor_slices(pred_features)
predicted = est.predict(lambda: pred_ds, yield_single_examples=True)
next(predicted)
ValueError: Tensor("IteratorV2:0", shape=(), dtype=resource) must be from the same graph as Tensor("TensorSliceDataset:0", shape=(), dtype=variant).
When you use the tf.data.Dataset module, it actually defines an input graph which is independent from the model graph. What happens here is that you first created a small graph by calling tf.data.Dataset.from_tensor_slices(), then the Estimator API created a second graph by calling dataset.make_one_shot_iterator() automatically. These two graphs can't communicate, so it throws an error.
To circumvent this, you should never create a dataset outside of estimator.train/evaluate/predict. This is why everything data related is wrapped inside input functions.
def predict_input_fn(data, batch_size=1):
    dataset = tf.data.Dataset.from_tensor_slices(data)
    return dataset.batch(batch_size).prefetch(None)
predicted = est.predict(lambda: predict_input_fn(pred_features), yield_single_examples=True)
next(predicted)
Now the graph is not created outside of the predict call.
I also added dataset.batch(), because the rest of your code expects batched data and it was throwing a shape error; prefetch() just speeds things up.
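For completeness, a sketch of loading the saved estimator before predicting, assuming you rebuild it with the same model_fn and model_dir used for training (the Estimator then restores the latest checkpoint from model_dir automatically):
est = tf.estimator.Estimator(model_fn=model_fn, model_dir='path/to/model_dir')
predicted = est.predict(lambda: predict_input_fn(pred_features), yield_single_examples=True)
print(next(predicted))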

Tensorflow: how do I extract/export variable values at every iteration of training?

I have been playing around with some neural networks on Tensorflow and I wanted to make a visualization of the neural network's learning process.
To do so, I intend to extract the following variables into text/JSON/CSV: the pre-activation result before the 1st layer, and the activation, bias and weight values for both testing and training, for each layer and for all time steps. I am looking for a generalizable solution so that I don't have to modify my source code (or at least not more than one or two lines) when applying visualization to future networks. Ideally I could run some function from another Python program to read any Python/TF code and extract the variables described above. So far I have considered the following solutions:
1) Use tf.summary and the FileWriter to save a serialized protocol buffer, then find a way to go from protocol buffer to JSON format. This unfortunately would not fit the bill, as it requires me to modify too much inner code.
2) Perhaps use https://www.tensorflow.org/api_docs/python/tf/train/export_meta_graph, although I am not sure how to implement this, given my TF foundations are not quite there yet.
3) I have also found this solution:
W_val, b_val= sess.run([W, b])
np.savetxt("W1.csv", W_val, delimiter=",")
np.savetxt("b1.csv", b_val, delimiter=",")
But the problem is that it only saves the final values of the weights and biases, whereas I am looking to save their values at all time steps of training.
If anyone has any suggestions on how to tackle this problem or any guidance I would appreciate it.
Many thanks
for step in range(num_train_steps):
    _, weight_values, bias_values = sess.run([your_train_op, weight, bias])
    # save weight_values and bias_values
Doing it with tf.summary is probably a good idea. You could then visualize it all in TensorBoard, much like with some of the tutorials and the Inception retraining code.
Alternatively you could perform fetches within your sess.run() call to grab whatever tensors you like at every step (i.e. every run call).
I have pasted below a response to a similar question regarding extracting the cross entropy:
When you do your session run call (e.g. res = sess.run(...)), you can put in a fetch for your cross-entropy variable.
For example, let's say you have a complicated sess.run() call that gets some predictions, but you also want your cross entropy; then you may have code that looks like this:
feeds = {x_data: x, y_data: y}
fetches = [y_result, cross_entropy]
res = sess.run(fetches=fetches, feed_dict=feeds)
predictions = res[0]  # your first fetch parameter
xent = res[1]         # your second fetch parameter
Fetches within the run call allow you to "fetch" tensors from your graph.
You should be able to do the above, but with a list of whatever tensors you want instead of the cross entropy. I use it to fetch both my summaries and intermediate accuracy values.
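Tying this together with the loop above, a minimal sketch (TF 1.x session style) that fetches the weights and biases at every training step and writes them out once at the end could look like the following; train_op, W, b and feeds stand in for your own graph and feed dict:
import numpy as np

weight_history, bias_history = [], []
for step in range(num_train_steps):
    _, W_val, b_val = sess.run([train_op, W, b], feed_dict=feeds)
    weight_history.append(W_val)
    bias_history.append(b_val)

# one file holding the values for all time steps
np.savez('training_trace.npz',
         weights=np.stack(weight_history),
         biases=np.stack(bias_history))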
