Partial Dependence Plot for a Keras Neural Network - python

I have a densely connected neural network that was built using the Keras Sequential API. I'm trying to create some partial dependence plots (PDPs) to use for a bit of sensitivity analysis, and I am attempting to use scikit-learn's plot_partial_dependence function to do this. I've been getting the following error: ValueError: 'estimator' must be a fitted regressor or classifier. When it first happened, I added the KerasClassifier wrapper, which I've used successfully in the past to run a Keras model through scikit-learn's GridSearchCV, but I'm still getting the same error. I've also tried KerasRegressor.
Can anyone tell me what's wrong and how I could fix it? Do I absolutely need to use scikit-learn's decision-tree-based estimators to be able to use the PDP function? If so, what's the biggest implementation difference between Keras neural networks and decision trees? (I've never used decision trees; my machine learning experience is limited to Keras.)
My relevant code is below, and I'm running Python on Google Colab's GPU. I'm sure there are several issues in that last line, but I can't get past this error to figure them out.
from sklearn.inspection import plot_partial_dependence
from keras.wrappers.scikit_learn import KerasClassifier
from keras.wrappers.scikit_learn import KerasRegressor
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.backend import sigmoid
from keras import optimizers

def create_model():
    # Custom swish activation, registered so layers can reference it by name
    def swish(x):
        return x * sigmoid(x)

    from keras.utils.generic_utils import get_custom_objects
    from keras.layers import Activation
    get_custom_objects().update({'swish': swish})

    model = Sequential()
    model.add(Dense(1024, activation='swish', input_shape=(6,)))
    model.add(Dropout(.1))
    model.add(Dense(512, activation='swish'))
    model.add(Dense(256, activation='swish'))
    model.add(Dropout(.1))
    model.add(Dense(128, activation='swish'))
    model.add(Dense(64, activation='swish'))
    model.add(Dropout(.1))
    model.add(Dense(32, activation='swish'))
    model.add(Dense(16, activation='swish'))
    model.add(Dropout(.1))
    model.add(Dense(12, activation='softmax'))
    opt = optimizers.Adam(lr=0.05)
    # Pass the configured optimizer object; the original passed the string
    # 'adam', which silently ignored the lr=0.05 setting above.
    model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
    return model

from keras.callbacks import LearningRateScheduler
from keras.callbacks import EarlyStopping
import math

def scheduler(epoch, lr):
    # Hold the learning rate for 20 epochs, then decay it exponentially
    if epoch < 20:
        return lr
    else:
        return lr * math.exp(-0.1)

callback = LearningRateScheduler(scheduler, verbose=1)
weightsCallback = EarlyStopping(patience=30, monitor='accuracy', restore_best_weights=True, min_delta=1e-5, verbose=1)

modelClassified = KerasClassifier(build_fn=create_model)
modelClassified.fit(X_train, Y_train, batch_size=50, epochs=50, callbacks=[callback, weightsCallback], verbose=1)

disp = plot_partial_dependence(modelClassified, X_holdout, target=1, verbose=1,
                               features=[0, 1, 2, 3, 4, 5],
                               feature_names=['aspect ratio', 'diel inner radius', 'diel outer radius',
                                              'diel separation', 'diel height', 'diel constant'])

It turns out this error is actually a bug: my program should, by all means, have worked fine. There is an error in the plot_partial_dependence function's source code.
For much more detail and the solution I used to make it work, see this answer to another StackOverflow question: https://stackoverflow.com/a/61485502/13822019
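As a general illustration (a minimal sketch of the wrapper idea, not the exact fix from the linked answer), one common way around estimator checks like this is to wrap the trained Keras model in a small scikit-learn-compatible estimator whose fitted state scikit-learn can detect; the class name and attributes below are hypothetical:
from sklearn.base import BaseEstimator, RegressorMixin

class KerasPDPWrapper(BaseEstimator, RegressorMixin):
    # Hypothetical wrapper: makes an already-trained Keras model look like
    # a fitted sklearn regressor to plot_partial_dependence.
    def __init__(self, keras_model):
        self.keras_model = keras_model

    def fit(self, X, y=None):
        # The Keras model is assumed trained; the trailing underscore is
        # the convention sklearn's check_is_fitted looks for.
        self.fitted_ = True
        return self

    def predict(self, X):
        return self.keras_model.predict(X).ravel()

# Hypothetical usage:
# wrapped = KerasPDPWrapper(trained_keras_model).fit(X_holdout)
# disp = plot_partial_dependence(wrapped, X_holdout, features=[0, 1, 2, 3, 4, 5])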

Related

How can I sort a neural network layer in Keras?

I am working on a multi-target regression problem in Keras and I would like the predicted values in the last layer to be sorted. I am currently implementing something like this:
# Lambda layer
import tensorflow as tf

def sort_layer(tensor):
    return tf.sort(tensor)

# Training model on train set
from keras.models import Sequential
from keras.layers import Dense, Lambda

model = Sequential()
model.add(Dense(100, input_dim=X_train.shape[1], activation="relu"))
model.add(Dense(150, activation="relu"))
model.add(Dense(50, activation="relu"))
model.add(Dense(y_train.shape[1], activation="linear"))
model.add(Lambda(sort_layer))
model.compile(loss="mse", optimizer="adam")
model.fit(X_train, y_train, epochs=100, batch_size=10, verbose=0)
This doesn't seem to work every time, as some predictions don't come out sorted. Can anyone explain what I am doing wrong and suggest a good fix?
Thank you!
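For reference, tf.sort sorts along the last axis in ascending order by default, so for a (batch_size, n_targets) output each row should come out ascending; a quick standalone check of the Lambda's behavior (assuming TensorFlow with eager execution):
import tensorflow as tf

batch = tf.constant([[3.0, 1.0, 2.0],
                     [0.5, -1.0, 0.0]])
# axis=-1 and direction='ASCENDING' are the defaults
print(tf.sort(batch))  # each row prints sorted ascending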

Keras: model.fit() and model.fit_generator() return history objects. How do I get Keras models?

I'm doing a guided RNN project. I'm using a textbook to guide me, but I'm doing a lot of things on my own. I've encountered an issue stemming from the fact that history, below, is not a Keras model but rather a History object.
from keras.models import Sequential
from keras import layers
from keras.optimizers import RMSprop
from keras.layers import LSTM

model = Sequential()
model.add(layers.Flatten(input_shape=(7, data.shape[-1])))
model.add(layers.Dense(32, activation='relu'))
model.add(layers.Dense(1))
val_steps = 99999 // 20
model.compile(optimizer=RMSprop(), loss='mae')
history = model.fit_generator(trainGen,
                              steps_per_epoch=250,
                              epochs=20,
                              validation_data=valGen,
                              validation_steps=val_steps,
                              use_multiprocessing=False)
The error occurs when I run the line below, because history is a History object rather than a model. Is there a way to extract a Keras model from it? Thank you in advance.
predictions = history.predict(testData)
Sorry, I can't comment yet. Why are you calling predict on the history and not on the model itself?
predictions = model.predict(testData)
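For context, fit() and fit_generator() train the model in place and return a History object whose .history dict records the per-epoch loss and metrics; a short sketch (the exact keys depend on how the model was compiled, and the array-based fit() call here is illustrative):
history = model.fit(X_train, y_train, epochs=20, validation_split=0.1)
print(history.history.keys())          # e.g. dict_keys(['loss', 'val_loss'])
predictions = model.predict(testData)  # predict on the model, not the History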

Randomness of LSTM model

I have an LSTM model like the one below:
model = Sequential()
model.add(Conv1D(3, 32, input_shape=(60, 12)))
model.add(LSTM(units=256, return_sequences=False, dropout=0.25))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()
Each time I use the same dataset to train it, I get a different model. Most of the time the performance of the trained model is acceptable, but sometimes it is really bad. I think there is some randomness during training or initialization. So how can I fix everything to get the same model from each training run?
I've experienced this problem with Keras as well. It has to do with the random seeds; you can fix them like this before importing Keras, so that you get consistent results.
import numpy as np
np.random.seed(1000)
import os
import random
os.environ['PYTHONHASHSEED'] = '0'
random.seed(12345)
# Also fix the tf randomness if you need to (in TF 2.x the call is
# tf.random.set_seed instead):
import tensorflow as tf
tf.set_random_seed(1234)
This worked for me.
Weights are initialized randomly in neural networks, so it is possible to get different results by design. If you think about how backpropagation works and how the cost function is minimized, you will notice that you don't have any guarantee that your network will find the global minimum. Fixing the seed is one way to get reproducible results, but on the other hand you limit your network to a fixed starting position, from which it may never reach the global minimum.
A lot of complex models, especially LSTMs, are unstable. You could look at convolutional approaches; I noticed they perform almost as well and are much more stable.
https://arxiv.org/pdf/1803.01271.pdf
You can save it:
from keras.models import load_model
model.save("lstm_model.h5")
And load it later on:
# load_model is a standalone function; model.load() does not exist
model = load_model("lstm_model.h5")

How exactly does fitting with SGD work in Keras?

I'm a newbie in neural networks. I'm doing my university NN project in Keras. I assembled and trained a one-layer sequential model using the SGD optimizer:
[...]
nn_model = Sequential()
nn_model.add(Dense(32, input_dim=X_train.shape[1], activation='tanh'))
nn_model.add(Dense(1, activation='tanh'))
sgd = keras.optimizers.SGD(lr=0.001, momentum=0.25)
nn_model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
history = nn_model.fit(X_train, Y_train, epochs=2000, verbose=2, validation_data=(X_test, Y_test))
[...]
I've tried various learning rates, momentum values, and neuron counts, and I get satisfying accuracy and error results. But I need to know how Keras works, so could you please explain to me how exactly fitting works in Keras? I can't find it in the Keras documentation.
How does Keras update weights? Does it use a backpropagation algorithm? (I'm 95% sure.)
How is the SGD algorithm implemented in Keras? Is it similar to the Wikipedia explanation?
How exactly does Keras calculate a gradient?
Thank you kindly for any information.
Let's try to break it down; I'll cover only the Keras-specific bits:
How does Keras update weights? Using an Optimiser, which is the base class for the different optimisers. Each optimiser calculates the new weights in a get_updates function, which returns a list of update operations that, when run, apply the updates.
Back-propagation? Yes, but Keras doesn't implement it directly; it leaves automatic differentiation to the backend tensor libraries. For example, K.gradients calls tf.gradients in the TensorFlow backend.
SGD algorithm? It is implemented as described on Wikipedia, in the SGD class, with the basic extensions such as momentum. You can follow the code easily and see how it calculates the updates (there is a sketch of the update rule below).
How is the gradient calculated? Using back-propagation.
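For concreteness, here is a minimal numpy sketch of the per-step update the momentum variant of SGD performs (plain momentum, no Nesterov); the variable names are mine, not Keras's:
import numpy as np

def sgd_momentum_step(w, g, velocity, lr=0.001, momentum=0.25):
    # Accumulate a decaying moving direction, then move the weight along it
    velocity = momentum * velocity - lr * g
    return w + velocity, velocity

w = np.array([1.0, -2.0])   # a weight vector
v = np.zeros_like(w)        # its velocity buffer
g = np.array([0.5, -0.5])   # gradient of the loss w.r.t. w
w, v = sgd_momentum_step(w, g, v)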

Keras Visualization of Model Built from Functional API

I wanted to ask if there was an easy way to visualize a Keras model built from the Functional API?
Right now, the best way for me to debug a sequential model at a high level is:
model = Sequential()
model.add(...
...
print(model.summary())
SVG(model_to_dot(model).create(prog='dot', format='svg'))
However, I am having a hard time finding a good way to visualize a model when we build a more complex, non-sequential one with the Functional API.
Yes there is: try checking keras.utils, which has a plot_model() method, as explained in detail here. It seems you are already familiar with keras.utils.vis_utils and the model_to_dot method, but this is another option. Its usage is something like:
from keras.utils import plot_model
plot_model(model, to_file='model.png')
To be honest, that is the best I have managed to find using Keras only. Using model.summary() as you did is also useful sometimes. I also wish there were some tool enabling better visualization of one's models, perhaps even showing the weights per layer so one could decide on optimal network structures and initializations (if you know about one, please tell :] ).
Probably the best option you currently have is to visualize things in TensorBoard, which you can include in Keras with the TensorBoard callback. This enables you to visualize your training and the metrics of interest, as well as some info on the activations of your layers, your biases and kernels, etc. Basically, you have to add this code to your program before fitting your model:
from keras.callbacks import TensorBoard

# indicate the folder to save to, plus other options
tensorboard = TensorBoard(log_dir='./logs/run1', histogram_freq=1,
                          write_graph=True, write_images=False)
# save it in your callback list, where you can include other callbacks
callbacks_list = [tensorboard]
# then pass to fit as a callback; remember to use validation_data also
regressor.fit(X, Y, callbacks=callbacks_list, epochs=64,
              validation_data=(X_test, Y_test), shuffle=True)
You can then run TensorBoard (which serves a local web page) with the following command in your terminal:
tensorboard --logdir=/logs/run1
This will then tell you on which port to visualize your training. If you have different runs, you can pass --logdir=/logs instead to visualize them together for comparison. There are of course more options for using TensorBoard, so I suggest you check the included links if you are considering its use.
After a bit of googling and trial and error... it turns out you just have to convert the entire Functional API model back into "model format":
from keras.models import Model

model = some_model()
output_layer = _build_output()
# outputs should be the output tensor; the original passed finalmodel,
# which is not defined yet at this point
finalmodel = Model(inputs=model.input, outputs=output_layer)
Then you can run finalmodel.summary(), or any of the plotting features available for sequential models.
However, I guess this requires careful tracking of the model's inputs and outputs, which I admittedly did not do.
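For completeness, a self-contained sketch of the same idea with made-up shapes and layer names (everything below is illustrative, not from the question):
from keras.layers import Input, Dense, concatenate
from keras.models import Model

inp_a = Input(shape=(8,))
inp_b = Input(shape=(4,))
hidden = Dense(16, activation='relu')(concatenate([inp_a, inp_b]))
out = Dense(1)(hidden)

functional_model = Model(inputs=[inp_a, inp_b], outputs=out)
functional_model.summary()  # summary() and plot_model() work just as for Sequential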
In newer TensorFlow versions, tf.keras.utils.plot_model exposes the same functionality with a few more options:
tf.keras.utils.plot_model(
    model,
    to_file="model.png",
    show_shapes=False,
    show_layer_names=True,
    rankdir="TB",
    expand_nested=False,
    dpi=96,
)
