TensorFlow LSTM incremental learning and multiple predictions - Python

I am training a TensorFlow model and later plan to use it for predictions.
import numpy as np
import pandas as pd
import sys
import tensorflow as tf
from tensorflow.contrib import learn
from sklearn.metrics import mean_squared_error, mean_absolute_error
from lstm_predictor import load_csvdata, lstm_model
import pymysql as mariadb
LOG_DIR = './ops_logs'
K = 1 # history used for lstm.
TIMESTEPS = 65*K
RNN_LAYERS = [{'steps': TIMESTEPS}]
DENSE_LAYERS = [10, 10]
TRAINING_STEPS = 1000
BATCH_SIZE = 1
PRINT_STEPS = TRAINING_STEPS / 10
def train_model(symbol=1, categ='M1', limit=1000, upgrade=False):
    MODEL_DIR = 'model/' + str(symbol) + categ
    regressor = learn.TensorFlowEstimator(model_fn=lstm_model(TIMESTEPS, RNN_LAYERS, DENSE_LAYERS),
                                          n_classes=0,
                                          verbose=1,
                                          steps=TRAINING_STEPS,
                                          optimizer='Adagrad',
                                          learning_rate=0.03,
                                          continue_training=True,
                                          batch_size=BATCH_SIZE)
    # df: the input DataFrame, assumed to be loaded elsewhere (e.g. via pymysql)
    X, y = load_csvdata(df, K)
    regressor.fit(X['train'], y['train'], logdir=MODEL_DIR)  # logdir=LOG_DIR)
    X['test'] = X['train'][-10:]
    y['test'] = y['train'][-10:]
    predicted = regressor.predict(X['test'])
    print('actual', 'predictions')
    for i, yi in enumerate(y['test']):
        print(yi[0], ' ', predicted[i])
    mae = mean_absolute_error(y['test'], predicted)
    print("mean_absolute_error : %f" % mae)
    ###############################
    regressor.save(LOG_DIR)

train_model()
Then I want to write a predict function which would read the model from model/** and make predictions.
def predict(symbol=1, categ='M1'):
    # how to load the saved model data?
    pass
But I am unable to load the model using
regressor = learn.TensorFlowEstimator.restore(LOG_DIR)
since it's currently not implemented.
How can I make repeated predictions at multiple points in the future?
The model checkpoints are saved as:
checkpoint
graph.pbtxt
events.out.tfevents.1476102309.hera.creatory.org
events.out.tfevents.1476102926.hera.creatory.org
events.out.tfevents.1476105626.hera.creatory.org
events.out.tfevents.1476106521.hera.creatory.org
events.out.tfevents.1476106839.hera.creatory.org
events.out.tfevents.1476107001.hera.creatory.org
events.out.tfevents.1476107462.hera.creatory.org
model.ckpt-8001-00000-of-00001
model.ckpt-8001.meta
model.ckpt-8301-00000-of-00001
model.ckpt-8301.meta
model.ckpt-8601-00000-of-00001
model.ckpt-8601.meta
model.ckpt-8901-00000-of-00001
model.ckpt-8901.meta
model.ckpt-9000-00000-of-00001
model.ckpt-9000.meta
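Until restore is implemented, one workaround (a minimal sketch, assuming a TF 1.x checkpoint directory laid out as above) is to bypass the estimator and load the graph and weights directly; the tensor names below are hypothetical and would have to be looked up in graph.pbtxt:

import tensorflow as tf

with tf.Session() as sess:
    # rebuild the graph from the newest .meta file
    saver = tf.train.import_meta_graph('./ops_logs/model.ckpt-9000.meta')
    # restore the trained weights from the matching checkpoint
    saver.restore(sess, tf.train.latest_checkpoint('./ops_logs'))
    graph = tf.get_default_graph()
    # 'input:0' and 'output:0' are hypothetical names; inspect graph.pbtxt
    # to find the actual input and output tensors of the lstm_model graph
    x = graph.get_tensor_by_name('input:0')
    y = graph.get_tensor_by_name('output:0')
    predictions = sess.run(y, feed_dict={x: X['test']})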

Related

Python Keras sequential model predicts the same value (y_train average) for all inputs

I'm trying to build a sequential neural network with Keras. I generate a dataset by plugging random inputs into a known function and train my model on this dataset, long enough to reach a steady loss. Then I ask the model to predict the x_train values, but instead of predicting something close to y_train, it returns the same value regardless of the input x. This value also happens to be the average of the y_train values. I don't understand what I'm doing wrong and why this is happening.
I'm using the following function for training the model:
def train_model(x_train, y_train, batch_size, input_size, layer_sizes, activations, optimizer, epochs, loss='MeanSquaredError'):
    assert len(layer_sizes) == len(activations)
    n_layers = len(layer_sizes)
    model = Sequential()
    model.add(LayerNormalization(input_dim=input_size))
    model.add(Dense(layer_sizes[0], kernel_regularizer='l2', kernel_initializer='ones', activation=activations[0], input_dim=input_size, name='layer1'))
    for i in range(1, n_layers):
        model.add(Dense(layer_sizes[i], kernel_initializer='ones', activation=activations[i], name=f'layer{i+1}'))
    model.compile(
        optimizer=optimizer,
        loss=loss,  # MeanSquaredLogarithmicError
    )
    print(model.summary())
    history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs)
    loss_history = history.history['loss']
    plt.scatter(x=np.arange(1, epochs+1), y=loss_history)
    plt.show()
    return model
I then created an arbitrary function (just for test purposes) as:
def func(x1, x2, x3, x4):
    y = (x1**3 + (x2*x3 + 2)) / (x4 + x2*x1)
    return y
and made a random dataset with this function:
def random_points_in_range(n, ranges):
    points = np.empty((n, len(ranges)))
    for i, element in enumerate(ranges):
        start = min(element[1], element[0])
        interval = abs(element[1] - element[0])
        rand_check = np.random.rand(n)
        randoms = (rand_check * interval) + start
        points[:, i] = randoms.T
    return points

def generate_random_dataset(n=200, ranges=[(0,10),(0,10),(0,10),(0,10)]):
    x_dataset = random_points_in_range(n, ranges)
    y_dataset = np.empty(n)
    for i in range(n):
        x1, x2, x3, x4 = x_dataset[i]
        y_dataset[i] = func(x1, x2, x3, x4)
    return x_dataset, y_dataset
I then train a model with these functions:
x_train,y_train = generate_random_dataset()
layer_sizes = [6,8,10,10,1]
activations = [LeakyReLU(),'relu','swish','relu','linear']
opt = Adam(learning_rate=0.001)
epochs = 3000
model=train_model(x_train,y_train,5,4,layer_sizes,activations,opt,epochs,loss='MeanSquaredError')
If you want to run the code, these are the imports you need (LeakyReLU is added here because the activations list above uses it):
import numpy as np
from matplotlib import pyplot as plt
from sklearn.model_selection import train_test_split
import random
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LayerNormalization
from tensorflow.keras.layers import LeakyReLU
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import regularizers
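A quick sanity check for the reported collapse, using the model and data defined above (a diagnostic sketch only, not a fix):

preds = model.predict(x_train)
print(preds.min(), preds.max())  # a near-zero spread confirms a constant output
print(y_train.mean())            # the constant the model reportedly converges to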

Implementing federated learning [closed]

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 2 years ago.
I am new to Python and machine learning. I tried to implement the following code for federated learning with the MNIST dataset, but it doesn't work. It is supposed to train a model in a distributed way on local workers. The JPEG version of the MNIST dataset is used here; it consists of 42,000 digit images, with each class kept in a separate folder. I load the data into memory using the code snippet below and keep 10% of the data for testing the trained global model later on.
The following error appears when I run fl_implemetation.py:
(base) C:\python1>fl_implemetation.py
File "C:\python1\fl_implemetation.py", line 112
global_acc, global_loss = test_model(X_test, Y_test, global_model, comm_round)SGD_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(len(y_train)).batch(320)
^
SyntaxError: invalid syntax
There are two Python files. The first is fl_implemetation.py.
The original code I am using can be found here:
https://github.com/datafrick/tutorial
import numpy as np
import random
import cv2
import os
from imutils import paths
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from sklearn.metrics import accuracy_score
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras import backend as K
from fl_mnist_implementation_tutorial_utils import *
#declare the path to your MNIST data folder
img_path = '/path/to/your/training/dataset'
#get the path list using the path object
image_paths = list(paths.list_images(img_path))
#apply our function
image_list, label_list = load(image_paths, verbose=10000)
#binarize the labels
lb = LabelBinarizer()
label_list = lb.fit_transform(label_list)
#split data into training and test set
X_train, X_test, y_train, y_test = train_test_split(image_list,
                                                    label_list,
                                                    test_size=0.1,
                                                    random_state=42)
#create clients
clients = create_clients(X_train, y_train, num_clients=10, initial='client')
#process and batch the training data for each client
clients_batched = dict()
for (client_name, data) in clients.items():
    clients_batched[client_name] = batch_data(data)
#process and batch the test set
test_batched = tf.data.Dataset.from_tensor_slices((X_test, y_test)).batch(len(y_test))
comms_round = 100
#create optimizer
lr = 0.01
loss='categorical_crossentropy'
metrics = ['accuracy']
optimizer = SGD(lr=lr,
                decay=lr / comms_round,
                momentum=0.9)
#initialize global model
smlp_global = SimpleMLP()
global_model = smlp_global.build(784, 10)
#commence global training loop
for comm_round in range(comms_round):
    # get the global model's weights - they will serve as the initial weights for all local models
    global_weights = global_model.get_weights()
    #initial list to collect local model weights after scaling
    scaled_local_weight_list = list()
    #randomize client data - using keys
    client_names = list(clients_batched.keys())
    random.shuffle(client_names)
    #loop through each client and create a new local model
    for client in client_names:
        smlp_local = SimpleMLP()
        local_model = smlp_local.build(784, 10)
        local_model.compile(loss=loss,
                            optimizer=optimizer,
                            metrics=metrics)
        #set local model weights to the weights of the global model
        local_model.set_weights(global_weights)
        #fit local model with the client's data
        local_model.fit(clients_batched[client], epochs=1, verbose=0)
        #scale the model weights and add to list
        scaling_factor = weight_scalling_factor(clients_batched, client)
        scaled_weights = scale_model_weights(local_model.get_weights(), scaling_factor)
        scaled_local_weight_list.append(scaled_weights)
        #clear session to free memory after each communication round
        K.clear_session()
    #to get the average over all the local models, we simply take the sum of the scaled weights
    average_weights = sum_scaled_weights(scaled_local_weight_list)
    #update global model
    global_model.set_weights(average_weights)
    #test global model and print out metrics after each communication round
    for (X_test, Y_test) in test_batched:
        global_acc, global_loss = test_model(X_test, Y_test, global_model, comm_round)SGD_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(len(y_train)).batch(320)
smlp_SGD = SimpleMLP()
SGD_model = smlp_SGD.build(784, 10)
SGD_model.compile(loss=loss,
                  optimizer=optimizer,
                  metrics=metrics)
# fit the SGD training data to the model
_ = SGD_model.fit(SGD_dataset, epochs=100, verbose=0)
#test the SGD global model and print out metrics
for (X_test, Y_test) in test_batched:
    SGD_acc, SGD_loss = test_model(X_test, Y_test, SGD_model, 1)
And the second, fl_mnist_implementation_tutorial_utils.py:
import numpy as np
import random
import cv2
import os
from imutils import paths
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
from sklearn.metrics import accuracy_score
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD
from tensorflow.keras import backend as K
def load(paths, verbose=-1):
    '''expects images for each class in a separate dir,
    e.g. all digits of class 0 in the directory named 0'''
    data = list()
    labels = list()
    # loop over the input images
    for (i, imgpath) in enumerate(paths):
        # load the image and extract the class label
        im_gray = cv2.imread(imgpath, cv2.IMREAD_GRAYSCALE)
        image = np.array(im_gray).flatten()
        label = imgpath.split(os.path.sep)[-2]
        # scale the image to [0, 1] and add to the list
        data.append(image/255)
        labels.append(label)
        # show an update every `verbose` images
        if verbose > 0 and i > 0 and (i + 1) % verbose == 0:
            print("[INFO] processed {}/{}".format(i + 1, len(paths)))
    # return a tuple of the data and labels
    return data, labels
def create_clients(image_list, label_list, num_clients=10, initial='clients'):
    '''return: a dictionary with clients' names as keys and
    data shards - tuples of image and label lists - as values.
    args:
        image_list: a list of numpy arrays of training images
        label_list: a list of binarized labels for each image
        num_clients: number of federated members (clients)
        initial: the clients' name prefix, e.g. clients_1
    '''
    #create a list of client names
    client_names = ['{}_{}'.format(initial, i+1) for i in range(num_clients)]
    #randomize the data
    data = list(zip(image_list, label_list))
    random.shuffle(data)
    #shard the data and assign a shard to each client
    size = len(data) // num_clients
    shards = [data[i:i + size] for i in range(0, size*num_clients, size)]
    #the number of clients must equal the number of shards
    assert(len(shards) == len(client_names))
    return {client_names[i]: shards[i] for i in range(len(client_names))}
def batch_data(data_shard, bs=32):
    '''Takes in a client's data shard and creates a tf.data.Dataset object from it
    args:
        data_shard: (data, label) pairs constituting a client's data shard
        bs: batch size
    return:
        tf.data.Dataset object'''
    #separate the shard into data and label lists
    data, label = zip(*data_shard)
    dataset = tf.data.Dataset.from_tensor_slices((list(data), list(label)))
    return dataset.shuffle(len(label)).batch(bs)
class SimpleMLP:
    @staticmethod
    def build(shape, classes):
        model = Sequential()
        model.add(Dense(200, input_shape=(shape,)))
        model.add(Activation("relu"))
        model.add(Dense(200))
        model.add(Activation("relu"))
        model.add(Dense(classes))
        model.add(Activation("softmax"))
        return model
def weight_scalling_factor(clients_trn_data, client_name):
    client_names = list(clients_trn_data.keys())
    #get the batch size
    bs = list(clients_trn_data[client_name])[0][0].shape[0]
    #first calculate the total number of training data points across clients
    global_count = sum([tf.data.experimental.cardinality(clients_trn_data[client_name]).numpy() for client_name in client_names]) * bs
    #get the total number of data points held by this client
    local_count = tf.data.experimental.cardinality(clients_trn_data[client_name]).numpy() * bs
    return local_count / global_count
def scale_model_weights(weight, scalar):
    '''function for scaling a model's weights'''
    weight_final = []
    steps = len(weight)
    for i in range(steps):
        weight_final.append(scalar * weight[i])
    return weight_final
def sum_scaled_weights(scaled_weight_list):
    '''Return the sum of the listed scaled weights. This is equivalent to the scaled average of the weights'''
    avg_grad = list()
    #get the average gradient across all client gradients
    for grad_list_tuple in zip(*scaled_weight_list):
        layer_mean = tf.math.reduce_sum(grad_list_tuple, axis=0)
        avg_grad.append(layer_mean)
    return avg_grad
def test_model(X_test, Y_test, model, comm_round):
    cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
    #logits = model.predict(X_test, batch_size=100)
    logits = model.predict(X_test)
    loss = cce(Y_test, logits)
    acc = accuracy_score(tf.argmax(logits, axis=1), tf.argmax(Y_test, axis=1))
    print('comm_round: {} | global_acc: {:.3%} | global_loss: {}'.format(comm_round, acc, loss))
    return acc, loss
You forgot to add a newline (\n) in this line:
global_acc, global_loss = test_model(X_test, Y_test, global_model, comm_round)SGD_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(len(y_train)).batch(320)
So, this line should be two lines like so:
global_acc, global_loss = test_model(X_test, Y_test, global_model, comm_round)
SGD_dataset = tf.data.Dataset.from_tensor_slices((X_train, y_train)).shuffle(len(y_train)).batch(320)

Bad accuracy when prediction happens

After I trained my model for the toxic comment challenge in Keras, the prediction accuracy is bad. I'm not sure if I'm doing something wrong, but the accuracy during training was pretty good (~0.98).
How I trained
import sys, os, re, csv, codecs, numpy as np, pandas as pd
import matplotlib.pyplot as plt
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model
from keras import initializers, regularizers, constraints, optimizers, layers
train = pd.read_csv('train.csv')
list_classes = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
y = train[list_classes].values
list_sentences_train = train["comment_text"]
max_features = 20000
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(list_sentences_train))
list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train)
maxlen = 200
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
inp = Input(shape=(maxlen, ))
embed_size = 128
x = Embedding(max_features, embed_size)(inp)
x = LSTM(60, return_sequences=True,name='lstm_layer')(x)
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
x = Dense(50, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(6, activation="sigmoid")(x)
model = Model(inputs=inp, outputs=x)
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])
batch_size = 32
epochs = 2
print(X_t[0])
model.fit(X_t,y, batch_size=batch_size, epochs=epochs, validation_split=0.1)
model.save("m.hdf5")
This is how I predict
model = load_model('m.hdf5')
list_sentences_train = np.array(["I love you Stackoverflow"])
max_features = 20000
tokenizer = Tokenizer(num_words=max_features)
tokenizer.fit_on_texts(list(list_sentences_train))
list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train)
maxlen = 200
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
print(X_t)
print(model.predict(X_t))
Output
[[ 1.97086316e-02 9.36032447e-05 3.93966911e-03 5.16672269e-04
3.67353857e-03 1.28102733e-03]]
In the inference (i.e. prediction) phase, you should use the same pre-processing steps you used during training of the model. Therefore, you should not create a new Tokenizer instance and fit it on your test data. Rather, if you want to be able to do prediction later with the same model, besides the model you must also save all the statistics obtained from the training data, such as the vocabulary in the Tokenizer instance. So it would look like this:
import pickle
# building and training of the model as you have done ...
# store all the data we need later: model and tokenizer
model.save("m.hdf5")
with open('tokenizer.pkl', 'wb') as handler:
    pickle.dump(tokenizer, handler)
And now in prediction phase:
import pickle
from keras.models import load_model
from keras.preprocessing.sequence import pad_sequences

model = load_model('m.hdf5')
with open('tokenizer.pkl', 'rb') as handler:
    tokenizer = pickle.load(handler)

list_sentences_train = ["I love you Stackoverflow"]
# use the same tokenizer instance you used in the training phase
list_tokenized_train = tokenizer.texts_to_sequences(list_sentences_train)
maxlen = 200
X_t = pad_sequences(list_tokenized_train, maxlen=maxlen)
print(model.predict(X_t))

Fit method of my model does not perform training after being loaded with load_model

I have trained a neural network using tf.keras and saved the whole model with ModelCheckpoint to a .h5 file.
However, when I restore it with models.load_model and then train it again with fit, it only returns a History object and does nothing more.
Below is the minimal example for the training:
import numpy as np
import tensorflow as tf
# Creates dummy data
train_x = np.random.randint(10,size=40).reshape(-1,1)
train_y = np.random.randint(2,size=40).reshape(-1,1)
train_set = (train_x,train_y)
val_x = np.random.randint(10,size=20).reshape(-1,1)
val_y = np.random.randint(2,size=20).reshape(-1,1)
val_set = (val_x,val_y)
# Set Learning Rate Decay
import math
def step_decay(epoch):
    print('--- Epoch:', epoch)
    print(tf.keras.callbacks.History())
    init_lr = 0.001
    drop = 0.9
    epochs_drop = 1.0
    lr = init_lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay)
# Saves the whole model
cp_callback = tf.keras.callbacks.ModelCheckpoint('model.h5',
                                                 save_weights_only=False,
                                                 verbose=True)
# Creates the model
model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(1,activation='relu',use_bias=False,input_dim=(1)))
model.add(tf.keras.layers.Dense(100,activation='relu',use_bias=False))
model.add(tf.keras.layers.Dense(1,activation='relu',use_bias=False))
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
print('Learning Rate: ',tf.keras.backend.eval(model.optimizer.lr))
# Train the model
model.fit(x=train_set[0], y=train_set[1], epochs=2, steps_per_epoch=40,
          validation_data=val_set, validation_steps=20,
          callbacks=[lr_callback, cp_callback])
print('Learning Rate: ',tf.keras.backend.eval(model.optimizer.lr))
The code I am currently using to load it again is the following.
import numpy as np
import tensorflow as tf
# Creates dummy data
train_x = np.random.randint(10,size=40).reshape(-1,1)
train_y = np.random.randint(2,size=40).reshape(-1,1)
train_set = (train_x,train_y)
val_x = np.random.randint(10,size=20).reshape(-1,1)
val_y = np.random.randint(2,size=20).reshape(-1,1)
val_set = (val_x,val_y)
# Set Learning Rate Decay
import math
def step_decay(epoch):
    print('--- Epoch:', epoch)
    print(tf.keras.callbacks.History())
    init_lr = 0.001
    drop = 0.9
    epochs_drop = 1.0
    lr = init_lr * math.pow(drop, math.floor((1 + epoch) / epochs_drop))
    return lr
lr_callback = tf.keras.callbacks.LearningRateScheduler(step_decay)
# Saves the whole model
cp_callback = tf.keras.callbacks.ModelCheckpoint('model.h5',
                                                 save_weights_only=False,
                                                 verbose=True)
# Load model
model = tf.keras.models.load_model('model.h5')
print('Learning Rate: ',tf.keras.backend.eval(model.optimizer.lr))
model.fit(x=train_set[0], y=train_set[1], epochs=2, steps_per_epoch=40,
          validation_data=val_set, validation_steps=20, initial_epoch=3,
          callbacks=[lr_callback, cp_callback])
As you can observe when running it, the learning rate is restored, and hence the whole model as well, or at least that's what I think. However, after running model.fit(...) it does nothing but return <tensorflow.python.keras.callbacks.History object at 0x7f11c81cb940>. Any idea how to allow it to train again?
EDIT: I also tried to compile it by setting the compile argument of load_model to True.
Did you try to compile it after loading?
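For example, a minimal sketch of that suggestion, assuming the same loss, optimizer, and metrics the model was originally compiled with:

model = tf.keras.models.load_model('model.h5')
# re-compile explicitly after loading, with the training-time settings
model.compile(loss='mean_squared_error',
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])
model.fit(x=train_set[0], y=train_set[1], epochs=2, steps_per_epoch=40,
          validation_data=val_set, validation_steps=20,
          callbacks=[lr_callback, cp_callback])

Note that initial_epoch=3 is dropped in this sketch: Keras trains from initial_epoch up to epochs, so with epochs=2 and initial_epoch=3 fit considers training already finished and returns the History object immediately.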

How to save the model for text-classification in tensorflow?

Reading the TensorFlow documentation for text classification, I have put together the script below to train a model for text classification (positive/negative). I am not sure about one thing: how can I save the model to reuse it later? Also, how can I test it on the input test set that I have?
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
    data = {}
    data["sentence"] = []
    data["sentiment"] = []
    for file_path in os.listdir(directory):
        with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
            data["sentence"].append(f.read())
            data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
    return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
    pos_df = load_directory_data(os.path.join(directory, "pos"))
    neg_df = load_directory_data(os.path.join(directory, "neg"))
    pos_df["polarity"] = 1
    neg_df["polarity"] = 0
    return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
    dataset = tf.keras.utils.get_file(
        fname="aclImdb.tar.gz",
        origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
        extract=True)
    train_df = load_dataset(os.path.join(os.path.dirname(dataset),
                                         "aclImdb", "train"))
    test_df = load_dataset(os.path.join(os.path.dirname(dataset),
                                        "aclImdb", "test"))
    return train_df, test_df
# Reduce logging output.
tf.logging.set_verbosity(tf.logging.ERROR)
train_df, test_df = download_and_load_datasets()
train_df.head()
# Training input on the whole training set with no limit on training epochs.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    train_df, train_df["polarity"], num_epochs=None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
    train_df, train_df["polarity"], shuffle=False)
# Prediction on the test set.
predict_test_input_fn = tf.estimator.inputs.pandas_input_fn(
    test_df, test_df["polarity"], shuffle=False)
embedded_text_feature_column = hub.text_embedding_column(
    key="sentence",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")
estimator = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.003))
# Training for 1,000 steps means 128,000 training examples with the default
# batch size. This is roughly equivalent to 5 epochs since the training dataset
# contains 25,000 examples.
estimator.train(input_fn=train_input_fn, steps=1000);
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
test_eval_result = estimator.evaluate(input_fn=predict_test_input_fn)
print("Training set accuracy: {accuracy}".format(**train_eval_result))
print("Test set accuracy: {accuracy}".format(**test_eval_result))
Currently, if I run the above script, it retrains the complete model. I want to reuse the model and have it output predictions for some sample texts that I have. How can I do this?
I have tried the following to save:
sess = tf.Session()
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver()
saver.save(sess, 'test-model')
but this throws an error saying ValueError: No variables to save.
You can train and predict on a saved/loaded Estimator model simply by passing a model_dir parameter both to the Estimator instance and to a tf.estimator.RunConfig instance, which is passed to the config parameter of pre-made estimators (available since about TensorFlow 1.4; this still works on TensorFlow 1.12):
model_path = '/path/to/model'
run_config = tf.estimator.RunConfig(model_dir=model_path,
                                    tf_random_seed=72,  # Default=None
                                    save_summary_steps=100,
                                    # save_checkpoints_steps=_USE_DEFAULT,  # Default=1000
                                    # save_checkpoints_secs=_USE_DEFAULT,  # Default=60
                                    session_config=None,
                                    keep_checkpoint_max=12,  # Default=5
                                    keep_checkpoint_every_n_hours=10000,
                                    log_step_count_steps=100,
                                    train_distribute=None,
                                    device_fn=None,
                                    protocol=None,
                                    eval_distribute=None,
                                    experimental_distribute=None)
classifier = tf.estimator.DNNLinearCombinedClassifier(
    config=run_config,
    model_dir=model_path,
    ...
)
You'll then be able to call classifier.train() and classifier.predict(), re-run the script skipping the classifier.train() call, and get the same results after calling classifier.predict() again.
This works using a hub.text_embedding_column feature column, and when using categorical_column_with_identity and embedding_column feature columns with a manually saved/restored VocabularyProcessor dictionary.
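As a minimal sketch of that workflow, adapted to the DNNClassifier from the question (the feature column and input functions are assumed to be the ones defined there):

model_path = '/path/to/model'
run_config = tf.estimator.RunConfig(model_dir=model_path)
estimator = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.train.AdagradOptimizer(learning_rate=0.003),
    config=run_config,
    model_dir=model_path)
# first run: train and checkpoint into model_path
estimator.train(input_fn=train_input_fn, steps=1000)
# later runs: skip train(); the estimator restores itself from model_path
sample_df = pd.DataFrame({"sentence": ["A wonderful film", "Terrible acting"]})
sample_input_fn = tf.estimator.inputs.pandas_input_fn(sample_df, shuffle=False)
predictions = list(estimator.predict(input_fn=sample_input_fn))

Each element of predictions is a dict with keys such as 'classes' and 'probabilities', which also covers the "output for some sample texts" part of the question.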
