Handling tensorflow session in a class - python

I'm using TensorFlow to predict the outputs of a neural network. I have a class where I have described the neural network, and a main file where the predictions are made and, based on the results, the weights are updated. However, the predictions seem to be really slow. Here is what my code looks like:
class NNPredictor():
    def __init__(self):
        self.input = tf.placeholder(...)
        ...
        self.output = (...)  # Neural network output

    def predict_output(self, sess, input):
        return sess.run(tf.squeeze(self.output), feed_dict={self.input: input})
Here's what the main file looks like:
sess = tf.Session()
predictor = NNPredictor()
input = ...  # some initial value
for i in range(num_iters):
    output = predictor.predict_output(sess, input)
    input = ...  # some function of output
However, if I use the following function definition in the class:
def predict_output(self):
    return self.output
And have the main file as follows:
sess = tf.Session()
predictor = NNPredictor()
input = ...  # some initial value
output_op = predictor.predict_output()
for i in range(num_iters):
    output = np.squeeze(sess.run(output_op, feed_dict={predictor.input: input}))
    input = ...  # some function of output
This version runs 20-30x faster. I don't quite understand what is going on here, and I'd like to know what the best practice would be.

Part of this has to do with the attribute and method lookups that Python performs behind the scenes. Here's some sample code to illustrate the idea:
import time

runs = 10000000

class A:
    def __init__(self):
        self.val = 1

    def get_val(self):
        return self.val

# Using a method call to fetch the attribute
obj = A()
start = time.time()
total = 0
for i in range(runs):
    total += obj.get_val()
end = time.time()
print(end - start)

# Using the object attribute directly
start = time.time()
total = 0
for i in range(runs):
    total += obj.val
end = time.time()
print(end - start)

# Assigning to a local variable first
start = time.time()
total = 0
local_var = obj.get_val()
for i in range(runs):
    total += local_var
end = time.time()
print(end - start)
On my machine, the three loops take the following times (in seconds):
1.49576115608
0.656110048294
0.551875114441
Specific to your case: the first version pays for a method call on every iteration, while the second hoists that work out of the loop, so calling your code many times this way adds up to a noticeable difference. More importantly, the first version calls tf.squeeze(self.output) inside predict_output, which adds a new squeeze node to the graph on every call; the graph keeps growing and each sess.run gets slower. The second version creates no new ops inside the loop (np.squeeze operates on the returned NumPy array instead), so every iteration does the same amount of work.
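For the TensorFlow case specifically, here is a minimal sketch of the usual pattern (the one-layer stand-in network and the toy update rule are placeholders, not the asker's real code): build every op once at graph-construction time, keep a reference to it, and only call sess.run inside the loop.

import numpy as np
import tensorflow as tf

class NNPredictor:
    def __init__(self):
        self.input = tf.placeholder(tf.float32, shape=[None, 1])
        # Stand-in single dense layer; the real network goes here.
        w = tf.Variable(tf.random_normal([1, 1]))
        b = tf.Variable(tf.zeros([1]))
        self.output = tf.matmul(self.input, w) + b
        # Build the squeeze op ONCE, instead of creating a new
        # graph node on every predict call.
        self.squeezed_output = tf.squeeze(self.output)

    def predict_output(self, sess, x):
        # sess.run only executes existing ops; the graph stays fixed.
        return sess.run(self.squeezed_output, feed_dict={self.input: x})

sess = tf.Session()
predictor = NNPredictor()
sess.run(tf.global_variables_initializer())
x = np.array([[1.0]])
for i in range(100):
    output = predictor.predict_output(sess, x)
    x = np.atleast_2d(output + 1.0)  # some function of the output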

Related

Creating a new object every time you call a function in python

I have a recommender system that I need to train, and I included the entire training procedure inside a function:
def train_model(data):
    model = Recommender()
    Recommender.train(data)
    pred = Recommender.predict(data)
    return pred
Something like this. Now I want to train it inside a loop, for different datasets:
preds_list = []
data_list = [dataset1, dataset2, dataset3, ...]
for data_subset in data_list:
    preds = train_model(data_subset)
    preds_list += [preds]
How can I make sure that every time I call train_model, a brand-new Recommender instance is created, not an old one trained on the previous dataset?
You are already creating a new instance every time you execute train_model. The problem is that you are not using the new instance: train and predict are called on the Recommender class itself.
You probably meant:
def train_model(data):
    model = Recommender()
    model.train(data)
    pred = model.predict(data)
    return pred
Use the instance you've instantiated, not the class:
class Recommender:
    def __init__(self):
        self.id = self

    def train(self, data):
        return data

    def predict(self, data):
        return data + str(self.id)

def train_model(data):
    model = Recommender()
    model.train(data)
    return model.predict(data)

data = 'a data '
x = {}
for i in range(3):
    x[i] = train_model(data)
    print(x[i])

# a data <__main__.Recommender object at 0x11cefcd10>
# a data <__main__.Recommender object at 0x11e0471d0>
# a data <__main__.Recommender object at 0x11a064d50>

Can tf.cond() be used to dynamically define the graph?

As a test case, I simply want to pass a tensor to a graph and have the model decide whether to multiply it by 2 once or twice, based on the evaluation of a boolean. Here is the code:
class what_to_do():
    def __init__(self):
        self.conditionally_determined_A = None
        self.conditionally_determined_B = None
        self.truth_values = tf.placeholder(shape=[None, 1], dtype=tf.bool)
        self.input = tf.placeholder(shape=[None, 1], dtype=tf.float32)

        # The question is, can this statement evaluate the conditional and then
        # direct the construction of the model's components?
        _ = tf.cond(tf.constant(True),
                    lambda: self.real_conditional(),
                    lambda: self.fake_condition())

        self.input_A = tf.Variable(initial_value=self.conditionally_determined_A,
                                   trainable=False,
                                   validate_shape=False)
        self.const_A = tf.constant([2.])
        self.operation_A1 = tf.multiply(self.input_A, self.const_A)

        self.input_B = tf.Variable(initial_value=self.conditionally_determined_B,
                                   trainable=False,
                                   validate_shape=False)
        self.const_B = tf.constant([2.])
        self.operation_B1 = tf.multiply(self.input_B, self.const_B)

        self.output = self.operation_B1

    # These functions serve as the condition coordinators for the model
    def real_conditional(self):
        print('in loop')
        self.conditionally_determined_B = self.input
        self.conditionally_determined_A = None
        return 1

    def fake_condition(self):
        print('not in loop')
        self.conditionally_determined_B = None
        self.conditionally_determined_A = self.input
        return 0

tf.reset_default_graph()
model = what_to_do()

data_set = np.array([[2, True], [2, True], [2, True],
                     [2, False], [2, False], [2, False], [2, False]])
i, t = np.split(ary=data_set, indices_or_sections=2, axis=-1)

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    output = sess.run(fetches=[model.output],
                      feed_dict={model.input_A: i, model.truth_values: t})
I've run into trouble trying to get tf.cond() to handle the tensor; it complains about ranks and the like (perhaps a question for another time). What I have done in the code above is simply hard-code the condition as True or False, so that the proper function executes. If it is True, the graph should start at input_B and not worry about input_A at all. Any help setting up tf.cond() to manipulate the graph dynamically?
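For reference: in graph-mode TensorFlow, tf.cond() does not redirect graph construction. Both branch functions are always built into the graph, each must return tensors of matching dtype, and the predicate must be a scalar boolean tensor; feeding a [None, 1] batch of booleans is likely what triggers the rank complaints. A minimal sketch under those constraints (the branch bodies are illustrative stand-ins):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 1])
pred = tf.placeholder(tf.bool, shape=[])  # scalar predicate, not a batch of bools

# Both lambdas are built into the graph; pred only picks which
# result is produced at run time.
out = tf.cond(pred,
              lambda: x * 2. * 2.,  # "multiply twice" branch
              lambda: x * 2.)       # "multiply once" branch

with tf.Session() as sess:
    print(sess.run(out, feed_dict={x: [[2.]], pred: True}))   # [[8.]]
    print(sess.run(out, feed_dict={x: [[2.]], pred: False}))  # [[4.]]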

How to use model in batch generator?

I want to use model.predict inside a batch generator. What are possible ways of achieving this?
It seems one option is to load the model on init and on epoch end:
class DataGenerator(keras.utils.Sequence):
    def __init__(self, model_name):
        ...  # load model here

    def on_epoch_end(self):
        ...  # load model again
In my experience, predicting with another model while training brings errors. You should probably simply chain your training model after your generator model.
Suppose you have:

- generator_model (the one you want to use inside the generator)
- training_model (the one you want to train)

Then:
from keras.layers import Input
from keras.models import Model

generatorInput = Input(shapeOfTheGeneratorInput)
generatorOutput = generator_model(generatorInput)
trainingOutput = training_model(generatorOutput)
entireModel = Model(generatorInput, trainingOutput)
Make sure that the generator model has all its layers set to non-trainable before compiling:
genModel = entireModel.layers[1]
for l in genModel.layers:
    l.trainable = False

entireModel.compile(optimizer=optimizer, loss=loss)
Now use the generator regularly.
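A usage sketch under those assumptions (raw_generator and the epoch count are placeholders, not from the original answer):

# raw_generator yields (inputs_for_generator_model, final_targets);
# the frozen generator_model runs as the first stage of entireModel.
entireModel.fit_generator(raw_generator, epochs=10)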
Predicting inside the generator:
import keras
from keras.models import load_model

class DataGenerator(keras.utils.Sequence):
    def __init__(self, model_name, modelInputs, batchSize):
        self.genModel = load_model(model_name)
        self.inputs = modelInputs
        self.batchSize = batchSize

    def __len__(self):
        l, rem = divmod(len(self.inputs), self.batchSize)
        return l + (1 if rem > 0 else 0)

    def __getitem__(self, i):
        items = self.inputs[i*self.batchSize:(i+1)*self.batchSize]
        items = doThingsWithItems(items)
        predItems = self.genModel.predict_on_batch(items)
        # The following line is the only reason not to simply chain the models
        predItems = doMoreThingsWithItems(predItems)
        # do something to get y_train_items as well
        return predItems, y_train_items
If you do find the error I mentioned, you can sacrifice the parallel generation capabilities and do a manual loop:

for e in range(epochs):
    for i in range(batches):
        x, y = generator[i]
        model.train_on_batch(x, y)

Python: pass multiple classes to a function/method

I am trying to write the derivative function given as pseudocode in my MWE below. It is supposed to calculate the numerical derivative of the cost of my neural network's prediction with respect to a parameter of one of its layers.
My problem is that I don't know how to pass an instance of NeuralNetwork and an instance of Layer to the function (or method?) and access both at the same time.
Looking into e.g. Passing a class to another class (Python) did not provide an answer for me.
import copy

class NeuralNetwork:
    def __init__(self):
        self.first_layer = Layer()
        self.second_layer = Layer()

    def cost(self):
        # not the actual cost, but not of interest here
        return self.first_layer.a + self.second_layer.a

class Layer:
    def __init__(self):
        self.a = 1

''' pseudocode
def derivative(NeuralNetwork, Layer):
    stepsize = 0.01
    cost_unchanged = NeuralNetwork.cost()
    NN_deviated = copy.deepcopy(NeuralNetwork)
    NN_deviated.Layer.a += stepsize
    cost_deviated = NN_deviated.cost()
    return (cost_deviated - cost_unchanged)/stepsize
'''

NN = NeuralNetwork()

''' pseudocode
derivative_first_layer = derivative(NN, first_layer)
derivative_second_layer = derivative(NN, second_layer)
'''
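A minimal sketch of one possible approach (the getattr-based lookup by attribute name is an assumption, not something from the original question): pass the layer's attribute name rather than the Layer instance. The layer must then be looked up on the deep copy, because the copy contains new Layer objects, so a reference to one of the original layers would point at the wrong network.

import copy

class Layer:
    def __init__(self):
        self.a = 1

class NeuralNetwork:
    def __init__(self):
        self.first_layer = Layer()
        self.second_layer = Layer()

    def cost(self):
        return self.first_layer.a + self.second_layer.a

def derivative(network, layer_name, stepsize=0.01):
    cost_unchanged = network.cost()
    nn_deviated = copy.deepcopy(network)
    # Look the layer up on the COPY; the deep copy holds new
    # Layer objects, not the ones inside the original network.
    getattr(nn_deviated, layer_name).a += stepsize
    cost_deviated = nn_deviated.cost()
    return (cost_deviated - cost_unchanged) / stepsize

NN = NeuralNetwork()
derivative_first_layer = derivative(NN, 'first_layer')    # ~1.0
derivative_second_layer = derivative(NN, 'second_layer')  # ~1.0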

How to take the output of one model as the input of another one with Tensorflow r-1.0?

I have defined two classes of models, x and y.
class x():
    def __init__(self, x_inp1, x_inp2):
        # do sth...

    def step(self, session, encoder_inputs):
        input_feed = {}
        for l in range(encoder_size):
            input_feed[self.encoder_inputs[l].name] = encoder_inputs[l]
        ...
        output_feed = [x_output]
        return session.run(x_output)

class y():
    def __init__(self, y_inp1, y_inp2):
        # do sth...

    def step(self, encoder_inputs):
        input_feed = {}
        for l in range(encoder_size):
            input_feed[self.encoder_inputs[l].name] = encoder_inputs[l]
        ...
They have quite similar step functions. Then I define another class to group them up:
class gp():
    def __init__(self, x_inp1, x_inp2, y_inp1, y_inp2):
        with tf.variable_scope('x'):
            self.x_model = x(x_inp1, x_inp2)
        with tf.variable_scope('y'):
            self.y_model = y(y_inp1, y_inp2)

    def step(self, session, encoder_inputs):
        x_output = self.x_model.step(session, encoder_inputs)
        y_output = self.y_model.step(session, x_output)
        ...
Note that y_model takes the output of x_model as its input. I run gp() in the main function:

with tf.Session() as sess:
    gp_m = gp(x_inp1, x_inp2, y_inp1, y_inp2)
    gp_m.step(sess, x_inp1, x_inp2, y_inp1, y_inp2)
After running x_output = self.x_model.step(...) and beginning y_output = self.y_model.step(x_output), I get this error:

InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'x/encoder0' with dtype int32
    [[Node: x/encoder0 = Placeholder[dtype=DT_INT32, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Note that this error points to x_model even though its step function has already finished. How can I use the output of x_model as the input of y_model without any error? Thanks in advance!
You should defer the calls to session.run so that they happen outside the step functions. The problem here is that trying to run y_model triggers x_model again, because they are connected in the graph.
Instead, it is better to fully separate the graph-build and graph-run stages of your program, so you know exactly which placeholders to provide, and when.
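A minimal sketch of that separation, with stand-in one-op "models" (the real encoder logic is elided, and these names are illustrative, not from the original code): each class only builds tensors, and a single session.run at the top level feeds every placeholder and fetches the final output.

import numpy as np
import tensorflow as tf

class XModel:
    def __init__(self):
        # Build stage: create tensors only; never call session.run here.
        self.inp = tf.placeholder(tf.int32, shape=[None], name='x_in')
        self.output = self.inp * 2   # stand-in for the real x model

class YModel:
    def __init__(self, upstream):
        # Wire y directly to x's output TENSOR, not to a numpy result.
        self.output = upstream + 1   # stand-in for the real y model

x_model = XModel()
y_model = YModel(x_model.output)

with tf.Session() as sess:
    # Run stage: one call, one feed_dict covering every placeholder.
    result = sess.run(y_model.output, feed_dict={x_model.inp: np.array([1, 2, 3])})
    print(result)  # [3 5 7]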
