I am quite new to TensorFlow, and in order to learn it I am currently trying to implement a very simple DNNRegressor that predicts the movement of an object in 2D, but I can't seem to get the predict function to work.
For this purpose I have some input data: the x and y coordinates of the object at a number of previous time steps. I want the output to be a reasonable estimate of the object's location if it continues to move in the same direction at the same speed.
I am using TensorFlow version 1.8.0.
My regressor is defined like this:
CSV_COLUMN_NAMES = ['X_0', 'X_1', 'X_2', 'X_3', 'X_4', 'Y_0', 'Y_1', 'Y_2', 'Y_3', 'Y_4', 'Y_5']

my_feature_columns = []
for key in CSV_COLUMN_NAMES:
    my_feature_columns.append(tf.feature_column.numeric_column(key=key))

regressor = estimator.DNNRegressor(feature_columns=my_feature_columns,
                                   label_dimension=1,
                                   hidden_units=hidden_layers,
                                   model_dir=MODEL_PATH,
                                   dropout=dropout,
                                   config=test_config)
My input is, like the one in the TensorFlow tutorial on premade estimators, a dict with the column names as keys.
An example of this input can be seen here.
regressor.train(arguments) and regressor.evaluate(arguments) seem to work just fine, but predict does not.
Parallel to the code on the TensorFlow site, I tried to do this:
y_pred = regressor.predict(input_fn=eval_input_fn(X_test, labels=None, batch_size=1))
and it seems like that works as well.
The problem I'm facing now is that I can't get anything from that y_pred object.
When I enter print(y_pred) I get <generator object Estimator.predict at 0x7fd9e8899888>, which suggests to me that I should be able to iterate over it, but
for elem in y_pred:
    print(elem)
results in TypeError: unsupported callable
Again, I'm quite new to this and I am sorry if the answer is obvious but I would be very grateful if someone could tell me what I'm doing wrong here.
The input_fn to regressor.predict should be a function. See the definition:
input_fn: A function that constructs the features.
You need to change your code to:
y_pred = regressor.predict(input_fn=eval_input_fn)
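Since eval_input_fn (as in the premade estimators tutorial) takes arguments, a common pattern is to wrap the call in a lambda so that predict still receives a callable rather than its result. A minimal sketch, assuming the tutorial-style eval_input_fn:

# Wrap the call so predict() gets a callable, not the result of calling it
y_pred = regressor.predict(
    input_fn=lambda: eval_input_fn(X_test, labels=None, batch_size=1))

# predict() yields one dict per example; DNNRegressor stores the output under 'predictions'
for pred in y_pred:
    print(pred['predictions'])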
Related
I trained a set of LinearRegression models using the following GridSearchCV
import pickle

from sklearn.compose import TransformedTargetRegressor
from sklearn.feature_selection import SelectKBest, f_regression, mutual_info_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PowerTransformer, StandardScaler

MAX_COLUMNS = list(range(2, len(house_df.columns)))

X = house_df.drop(columns=['SalePrice'])
y = house_df.loc[:, 'SalePrice']
column_list = MAX_COLUMNS

# Box-Cox transform the target
reg_strategy = TransformedTargetRegressor()
bcox_transformer = PowerTransformer(method='box-cox')

model_pipeline = Pipeline([("std_scaler", StandardScaler()),
                           ('feature_selector', SelectKBest()),
                           ('regress', reg_strategy)])

parameter_grid = [{'feature_selector__k': column_list,
                   'feature_selector__score_func': [f_regression, mutual_info_regression],
                   'regress__regressor': [LinearRegression()],
                   'regress__regressor__fit_intercept': [True],
                   'regress__transformer': [None, bcox_transformer]}]

score_types = {'MSE': 'neg_mean_squared_error', 'r2': 'r2'}

gs = GridSearchCV(estimator=model_pipeline, param_grid=parameter_grid,
                  scoring=score_types, refit='MSE', cv=5, n_jobs=5, verbose=1)
gs.fit(X, y)

PATH = './datasets/processed_data/'
gridsearch_result_filename = 'pfY_np10_nt2_rfS_ct0_8_st1_orY_ccY_LR1_GS.pkl'
full_path = PATH + gridsearch_result_filename
with open(full_path, 'wb') as file:
    pickle.dump(gs, file)
I then load the trained GridSearch and can make predictions using the best estimator as follows:
with open(MODEL_PATH, 'rb') as file:
    gs_results = pickle.load(file)

predictions = gs_results.predict(test_df)
The problem I am facing is that since the Box-Cox transform was applied during the grid search, all of my predictions are in the Box-Cox transformed domain (huge values).
I need to use the PowerTransformer's inverse_transform() method on my predictions, but I am not sure how to access it.
I can get the entire pipeline for the best estimator like this
gs_results.best_estimator_
I can then access the TransformedTargetRegressor inside the pipeline like this:
Taking a step further, we get all the way to the PowerTransformer inside the TransformedTargetRegressor like this:
After making it here, I foolishly thought I had made it where I needed to be and simply needed to use the PowerTransformer's inverse_transform method to get predictions back in the original units. However, much to my disappointment, an error is thrown:
The error seems pretty clear, telling me I cannot use the inverse_transform method because the PowerTransformer has not been fit.
This is where I am stumped. It doesn't make sense to say the PowerTransformer has not been fit, when clearly it was fit during the GridSearch process.
This makes me think I am simply accessing the PowerTransformer incorrectly, which is my current question.
Based on the setup above, does anyone know the correct way to take the inverse transform of my predictions so they are in the original units rather than the Box-Cox-transformed units?
I have been banging my head against the wall for this and have searched all over for a similar question. Thank you so much in advance!
-Braden
Much like here, the attribute transformer is the unfitted initialization attribute; you need the fitted transformer_ attribute.
However, I'm not sure why predict doesn't already do what you want; the documentation for TransformedTargetRegressor.predict says
Predict using the base regressor, applying inverse.
The regressor is used to predict and the inverse_func or inverse_transform is applied before returning the prediction.
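For illustration, a minimal sketch based on the pipeline above (the step name 'regress' comes from the question's Pipeline definition):

best_pipe = gs_results.best_estimator_
ttr = best_pipe.named_steps['regress']   # the TransformedTargetRegressor step
fitted_pt = ttr.transformer_             # the fitted transformer (a PowerTransformer if box-cox won the search)

# predict() on the pipeline already applies the inverse transform,
# so these predictions are in the original SalePrice units:
predictions = gs_results.predict(test_df)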
I'm struggling to implement a custom metric in Keras (2.4.3 with the TensorFlow backend) that lets me trigger an early stopping mechanism. Essentially, I want Keras to stop training a model if there is too big a decrease in the training loss. To do this, I am using the following code:
def custom_metric(y_true, y_pred):
    y = keras.losses.CategoricalCrossentropy(y_true, y_pred)
    z = 1.0/(1.0 - y.numpy())
    return z
model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['categorical_accuracy', custom_metric])

custom_stop = EarlyStopping(monitor='custom_metric', min_delta=0, patience=2,
                            verbose=1, mode='min', restore_best_weights=True)
I'm getting errors along the lines of AttributeError: 'CategoricalCrossentropy' object has no attribute 'numpy', which I understand is due to the definition of z, but I can't get something equivalent to work by replacing the floats in the definition of z with tf.constant values or anything like that. Does anyone have any suggestions?
Thanks a lot
Use this instead, mind the spelling:
keras.losses.categorical_crossentropy(y_true,y_pred)
This should work:
def custom_metric(y_true, y_pred):
    y = keras.losses.categorical_crossentropy(y_true, y_pred)
    z = 1.0/(1.0 - y)
    return z
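As a usage sketch (assuming model, opt, x_train and y_train already exist): the string passed to EarlyStopping's monitor has to match the name under which the metric appears in the training logs, which for a function metric is its function name.

model.compile(loss='categorical_crossentropy',
              optimizer=opt,
              metrics=['categorical_accuracy', custom_metric])

# 'custom_metric' matches the function name in the training logs;
# use 'val_custom_metric' to monitor the validation value instead.
custom_stop = EarlyStopping(monitor='custom_metric', min_delta=0, patience=2,
                            verbose=1, mode='min', restore_best_weights=True)

model.fit(x_train, y_train, epochs=20, callbacks=[custom_stop])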
I want to use the external optimizer interface within TensorFlow to use Newton-type optimizers, as tf.train only has first-order gradient-descent optimizers. At the same time, I want to build my network using tf.keras.layers, as it is way easier than using tf.Variable when building large, complex networks. I will show my issue with the following simple 1D linear regression example:
import tensorflow as tf
from tensorflow.keras import backend as K
import numpy as np
#generate data
no = 100
data_x = np.linspace(0,1,no)
data_y = 2 * data_x + 2 + np.random.uniform(-0.5,0.5,no)
data_y = data_y.reshape(no,1)
data_x = data_x.reshape(no,1)
# Make model using keras layers and train
x = tf.placeholder(dtype=tf.float32, shape=[None,1])
y = tf.placeholder(dtype=tf.float32, shape=[None,1])
output = tf.keras.layers.Dense(1, activation=None)(x)
loss = tf.losses.mean_squared_error(data_y, output)
optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss, method="L-BFGS-B")
sess = K.get_session()
sess.run(tf.global_variables_initializer())
tf_dict = {x : data_x, y : data_y}
optimizer.minimize(sess, feed_dict = tf_dict, fetches=[loss], loss_callback=lambda x: print("Loss:", x))
When running this, the loss does not change at all. When using any other optimizer from tf.train, it works fine. Also, when using tf.layers.Dense() instead of tf.keras.layers.Dense(), it does work with the ScipyOptimizerInterface. So really the question is: what is the difference between tf.keras.layers.Dense() and tf.layers.Dense()? I saw that the variables created by tf.layers.Dense() are of type tf.float32_ref while the variables created by tf.keras.layers.Dense() are of type tf.float32. As far as I know, _ref indicates that the tensor is mutable. So maybe that's the issue? But then again, any other optimizer from tf.train works fine with Keras layers.
Thanks
After a lot of digging I was able to find a possible explanation.
ScipyOptimizerInterface uses feed_dicts to simulate the updates of your variables during the optimization process. It only does an assign operation at the very end. In contrast, tf.train optimizers always do assign operations. The code of ScipyOptimizerInterface is not that complex so you can verify this easily.
Now the problem is that assigning variables via feed_dict works mostly by accident. Here is a link where I learnt about this. In other words, assigning variables via feed dict, which is what ScipyOptimizerInterface does, is a hacky way of doing updates.
Now this hack mostly works, except when it does not. tf.keras.layers.Dense uses ResourceVariables to model the weights of the model. This is an improved version of simple Variables that has cleaner read/write semantics. The problem is that under the new semantics the feed dict update happens after the loss calculation. The link above gives some explanations.
Now, tf.layers is currently a thin wrapper around tf.keras.layers, so I am not sure why it would work. Maybe there is some compatibility check somewhere in the code.
The solutions to address this are fairly simple.
Either avoid using components that use ResourceVariables. This can be kind of difficult.
Patch ScipyOptimizerInterface to do assignments for variables always. This is relatively easy since all the required code is in one file.
There was some effort to make the interface work with eager execution (which uses ResourceVariables by default). Check out this link.
I think the problem is with the line
output = tf.keras.layers.Dense(1, activation=None)(x)
In this format, output is not a layer but rather the output of a layer, which might be preventing the wrapper from collecting the weights and biases of the layer and feeding them to the optimizer. Try writing it in two lines, e.g.
output = tf.keras.layers.Dense(1, activation=None)
res = output(x)
If you want to keep the original format then you might have to manually collect all trainables and feed them to the optimizer via the var_list option
optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss, var_list = [Trainables], method="L-BFGS-B")
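A minimal sketch of that, assuming the graph from the question (in TF 1.x, tf.trainable_variables() collects the Dense layer's kernel and bias):

# Collect the trainables explicitly and hand them to the SciPy interface
var_list = tf.trainable_variables()
optimizer = tf.contrib.opt.ScipyOptimizerInterface(loss,
                                                   var_list=var_list,
                                                   method="L-BFGS-B")
optimizer.minimize(sess, feed_dict=tf_dict, fetches=[loss],
                   loss_callback=lambda l: print("Loss:", l))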
Hope this helps.
I am currently using tf 1.4, and I need help looking at the predictions of a tf.contrib.factorization.KMeansClustering estimator. My current code segment looks like:
km = KMeansClustering(num_clusters=8,
                      initial_clusters=KMeansClustering.KMEANS_PLUS_PLUS_INIT,
                      model_dir=MODEL,
                      relative_tolerance=0.01)
result = km.train(input_fn=lambda: gen_input(body))

input_fn = tf.estimator.inputs.pandas_input_fn(x={'x': tst}, shuffle=False)
y = result.predict(input_fn)
where body and tst are pandas DataFrames. print(y) gives:
<generator object Estimator.predict at 0x11ebecba0>
And trying things that I've looked up, like calling print(list(y)), print(next(y)), or iterating through y like:
for i in y:
    ...
for i in y.items():
    ...
for i in enumerate(y):
    ...
etc., gives the error TypeError: data must be either a numpy array or pandas DataFrame if pandas is installed; got dict. I can't find any other way to print this. Thanks
There is too little code here to confirm what's wrong or missing, and at least a full stack trace would be expected, so this answer may prove to be well off the mark once you add more information.
Are you expecting the call to pandas_input_fn to return something other than it does? It returns a function with signature () -> (dict of features, target). See the docs for details.
Also, you don't seem to be running a TensorFlow session. Until you do, all tensors and computations (predictions in your case) are just part of a graph; they will only have values after running a TF session.
See these docs for more details.
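As a hedged side note (the full stack trace is missing, so this is only a guess): the "got dict" part of the error suggests pandas_input_fn was handed a dict where it expects a DataFrame, so a sketch along these lines may get the generator to yield:

# pandas_input_fn expects x to be a pandas DataFrame, not a dict wrapping one
input_fn = tf.estimator.inputs.pandas_input_fn(x=tst, shuffle=False)

y = km.predict(input_fn=input_fn)   # a generator of per-example prediction dicts
for prediction in y:
    print(prediction)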
I'm finally using my LSTM model to predict things. However, I've run into a new problem that I don't quite understand. If I try to predict something using
sess.run(pred, feed_dict={x: xs})
It works great for the first prediction, but any subsequent predictions throw the error:
ValueError: Variable weight_def/weights already exists, disallowed. Did you mean to set reuse=True in VarScope?
Now, there are a TON of topics on this, and most of them say it's easily solved by doing what the error asks: just create a variable scope around the offending line and set variable reuse to true. But if I do that, I get the following error:
ValueError: Variable rnn_def/RNN/BasicLSTMCell/Linear/Matrix does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?
This is causing me quite the headache. I've read the TensorFlow variable sharing documentation over and over, and I can't for the life of me figure out what I am doing wrong. Here are the offending lines:
with tf.variable_scope("rnn_def"):
outputs, states = rnn.rnn(self.__lstm_cell,
self.__x,
dtype=tf.float32)
self.__outputs = outputs
self.__states = states
I have this code nested in a larger class that contains the remainder of the graph. To train it, I just call my "train" method over and over again, which seems to work fine; the problem ends up being prediction.
So my question is twofold:
Why do I require some sort of variable sharing only after the first prediction but the first call doesn't fail? What do I need to fix this code so I can predict more than once without causing an error?
When is variable sharing useful, and why is Tensorflow creating new variables each time I run it? How can I prevent this (do I want to prevent it?)?
Thank you!
Add a print statement to that block of code. I suspect it is being called multiple times. Or maybe you are creating multiple instances of the class, in which case each instance should have its own scope name.
To answer your questions.
Why do I require some sort of variable sharing only after the first
prediction but the first call doesn't fail? What do I need to fix this
code so I can predict more than once without causing an error?
No, you don't. The block of code creating the RNN is probably being called multiple times by accident.
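A minimal sketch of the usual pattern (build_graph, batches, and x are hypothetical placeholders here): construct the graph exactly once, then call sess.run as many times as you like without creating any new variables:

pred = build_graph()   # hypothetical helper that creates the RNN once and returns the output tensor

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for xs in batches:                            # hypothetical iterable of input arrays
        print(sess.run(pred, feed_dict={x: xs}))  # running the graph creates no new variables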
When is variable sharing useful, and why is Tensorflow creating new
variables each time I run it? How can I prevent this (do I want to
prevent it?)?
It is useful in the following case, where I have different input sources for part of my graph depending on whether it is training or predicting.
x_train, h_train = ops.sharpen_cell(x_train, h_train, 2, n_features, conv_size, n_convs, conv_activation, 'upsampler')
self.cost += tf.reduce_mean((x_train - y_train) ** 2)
level_scope.reuse_variables()
x_gen, h_gen = ops.sharpen_cell(x_gen, h_gen, 2, n_features, conv_size, n_convs, conv_activation, 'upsampler')
self.generator_outputs.append(tf.clip_by_value(x_gen, -1, 1))
In this example it reuses the variables for the generator, which were trained with the trainer. It is also useful if you want to unroll an RNN in a loop, such as in this case:
y = ...      # initial value
state = ...  # initial state
rnn = ...    # some sort of RNN cell

with tf.variable_scope("rnn") as scope:
    for t in range(10):
        y, state = rnn(y, state)
        scope.reuse_variables()
In this case it reuses the RNN weights between time steps, which is the desired behavior for an RNN.