I tried to create a stacking regressor to predict multiple outputs, with SVR and a neural network as base estimators and linear regression as the final estimator.
print(X_train.shape) #(73, 39)
print(y_train.shape) #(73, 13)
print(X_test.shape) #(19, 39)
print(y_test.shape) #(19, 13)
def build_nn():
    ann = Sequential()
    ann.add(Dense(40, input_dim=X_train.shape[1], activation='relu', name='Hidden_Layer_1'))
    ann.add(Dense(y_train.shape[1], activation='sigmoid', name='Output_Layer'))
    ann.compile(loss='mse', optimizer='adam', metrics=['mse'])
    return ann

keras_reg = KerasRegressor(model=build_nn, optimizer='adam', optimizer__learning_rate=0.001, epochs=100, verbose=0)
stacker = StackingRegressor(estimators=[('svr', SVR()), ('ann', keras_reg)], final_estimator=LinearRegression())
reg = MultiOutputRegressor(estimator=stacker)
model = reg.fit(X_train, y_train)
I am able to 'fit' the model. However, I get the following error when trying to predict:
prediction = reg.predict(X_test)
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 19 and the array at index 1 has size 247
In my opinion, the point here is the following. On the one hand, NN models do support multi-output regression tasks on their own; these can be solved by defining an output layer similar to the one you built, namely with a number of nodes equal to the number of outputs (though, with respect to your construction, I would specify a linear activation with activation=None rather than a sigmoid activation).
def build_nn():
    ann = Sequential()
    ann.add(Dense(40, input_dim=X_train.shape[1], activation='relu', name='Hidden_Layer_1'))
    ann.add(Dense(y_train.shape[1], name='Output_Layer'))
    ann.compile(loss='mse', optimizer='adam', metrics=['mse'])
    return ann
On the other hand, here you're trying to solve your multi-output regression task by calling the MultiOutputRegressor constructor on a StackingRegressor instance, i.e. by explicitly training one regression model per output, each such model being a combination of multiple regression models.
The issue arises from the concatenation of the predictions of the StackingRegressor base estimators, and in particular from their different shapes. Indeed:
the predictions of the MultiOutputRegressor instance are delegated to the StackingRegressor, as you can see in https://github.com/scikit-learn/scikit-learn/blob/7e1e6d09bcc2eaeba98f7e737aac2ac782f0e5f1/sklearn/multioutput.py#L234
in turn, in a StackingRegressor the predictions of each individual estimator are stacked together and used as input to the final_estimator to compute the prediction; .predict() is called on final_estimator in https://github.com/scikit-learn/scikit-learn/blob/7e1e6d09bcc2eaeba98f7e737aac2ac782f0e5f1/sklearn/ensemble/_stacking.py#L267 (and in particular, you can see that it takes the transformed X as input).
the transformed X is the result of the concatenation of the predictions of the StackingRegressor base estimators, as you can see in https://github.com/scikit-learn/scikit-learn/blob/7e1e6d09bcc2eaeba98f7e737aac2ac782f0e5f1/sklearn/ensemble/_stacking.py#L67.
That said, among the StackingRegressor base estimators you have an SVR() model, which by design cannot natively solve multi-output regression tasks, and a KerasRegressor neural network which, defined as you did, is meant to solve a multi-output regression task on its own, without delegating to MultiOutputRegressor. Therefore, what happens in _concatenate_predictions is that dimensionally inconsistent predictions arise: SVR() yields a 1D array of shape (19,) = (n_samples,), eventually reshaped into a (19, 1) array, while the KerasRegressor yields a 2D array of shape (19, 13) = (n_samples, n_outputs), eventually flattened and reshaped into a (19*13, 1) = (247, 1) array. This reflects the fact that letting your neural network output layer have a number of nodes equal to the number of outputs cannot fit into a StackingRegressor together with another base estimator that necessarily has to be extended via MultiOutputRegressor to solve a multi-output regression task.
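To make the shape mismatch concrete, here is a minimal numpy sketch of what _concatenate_predictions effectively attempts (the arrays are stand-ins for the base estimators' reshaped predictions):
import numpy as np

svr_preds = np.zeros((19, 1))       # SVR: (n_samples,) reshaped to (19, 1)
ann_preds = np.zeros((19 * 13, 1))  # ANN: (n_samples, n_outputs) flattened to (247, 1)

np.concatenate([svr_preds, ann_preds], axis=1)
# ValueError: all the input array dimensions for the concatenation axis must
# match exactly, but along dimension 0, the array at index 0 has size 19 and
# the array at index 1 has size 247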
Therefore, in my view, if you want to keep the same "architecture", you should let your neural network have an output layer with a single node, so that its predictions can be concatenated with the ones from the SVR model, made accessible to the StackingRegressor final_estimator, and eventually delegated to MultiOutputRegressor:
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
import tensorflow as tf
import tensorflow.keras
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from scikeras.wrappers import KerasRegressor
from sklearn.ensemble import StackingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
X, y = make_regression(n_samples=92, n_features=39, n_informative=39, n_targets=13, random_state=42)
print(X.shape, y.shape)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
def build_nn():
    ann = Sequential()
    ann.add(Dense(40, input_dim=X_train.shape[1], activation='relu', name='Hidden_Layer_1'))
    ann.add(Dense(1, name='Output_Layer'))
    ann.compile(loss='mse', optimizer='adam', metrics=['mse'])
    return ann

keras_reg = KerasRegressor(model=build_nn, optimizer='adam',
                           optimizer__learning_rate=0.001, epochs=100, verbose=0)
stacker = StackingRegressor(estimators=[('svr', SVR()), ('ann', keras_reg)], final_estimator=LinearRegression())
reg = MultiOutputRegressor(estimator=stacker)
reg.fit(X_train, y_train)
predictions = reg.predict(X_test)
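As a quick sanity check, the predictions now have the expected (n_samples, n_outputs) shape:
print(predictions.shape)  # (19, 13)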
I started learning how to use Keras. I have a raw file that encodes ASCII values of characters in a sentence with a corresponding product name. For example, abcd toothpaste cream would be classified as Toothpaste. The first two lines (out of ~150,000 lines) of the file are shown below. The file is also available for download here (this link will last two months from today).
12,15,11,31,30,15,0,26,28,15,29,30,19,17,15,0,19,24,30,15,28,24,11,30,0,18,19,17,19,15,24,15,0,35,0,12,15,22,22,15,36,11,0,12,15,22,22,15,36,11,0,16,28,11,17,11,24,13,19,11,29,0,16,15,23,15,24,19,24,11,29,0,11,36,36,15,14,19,24,15,0,11,36,36,15,14,19,24,15,11,22,11,19,11,0,26,15,28,16,31,23,15,0,16,15,23,15,24,19,24,25,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,Body Care Other
12,15,19,15,28,29,14,25,28,16,0,30,18,11,19,22,11,24,14,0,13,25,0,22,30,14,0,29,21,19,24,13,11,28,15,0,26,28,15,26,11,28,11,30,19,25,24,29,0,16,11,13,19,11,22,0,13,22,15,11,24,29,15,28,29,0,24,19,32,15,11,0,16,11,13,19,11,22,0,13,22,15,11,24,29,15,28,29,0,26,28,25,14,31,13,30,29,0,24,19,32,15,11,0,23,11,21,15,0,31,26,0,13,22,15,11,28,0,23,19,13,15,22,22,11,28,0,33,11,30,15,28,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,Skin Care Other
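For reference, the encoding appears to map the letters a-z to the values 11-36, with 0 marking spaces/padding; a minimal decoding sketch under that assumption:
def decode_row(values):
    # assumes a-z -> 11-36 and 0 -> space/padding (inferred from the sample rows)
    return ''.join(' ' if v == 0 else chr(v - 11 + ord('a')) for v in values).strip()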
I am following a blog post that uses a simple deep learning Keras model to do multi-class classification. I changed the configuration of the neural network to 243 inputs --> [100 hidden nodes] --> 67 outputs (because I have 67 classes to classify). The code is below:
import numpy
import pandas
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasClassifier
from keras.utils import np_utils
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.preprocessing import LabelEncoder
def baseline_model():
    model = Sequential()
    # I changed it to 243 inputs --> [100 hidden nodes] --> 67 outputs (because I have 67 classes to classify)
    model.add(Dense(100, input_dim=X_len, activation='relu'))
    model.add(Dense(Y_cnt, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
seed = 7
numpy.random.seed(seed)
# load dataset
dataframe = pandas.read_csv("./input/raw_mappings.csv", header=None)
dataset = dataframe.values
X_len = len(dataset[0,:-1])
X = dataset[:,0:X_len].astype(float)
Y = dataset[:,X_len]
Y_cnt = len(numpy.unique(Y))
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# convert integers to dummy variables (i.e. one hot encoded)
dummy_y = np_utils.to_categorical(encoded_Y)
estimator = KerasClassifier(build_fn=baseline_model, epochs=200, batch_size=5, verbose=0)
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(estimator, X, dummy_y, cv=kfold)
print("Baseline: %.2f%% (%.2f%%)" % (results.mean()*100, results.std()*100))
But it never seems to finish when I run it on my desktop computer, even after more than 12 hours. I'm starting to think almost nothing is going on. Is there something I'm doing wrong with either the configuration of the neural network or the problem I'm trying to solve (meaning, maybe a Sequential model is not the right way to go for classifying >60 classes)?
Any pointer or tip would be greatly appreciated. Thank you.
I have the following code, using Keras Scikit-Learn Wrapper:
from keras.models import Sequential
from sklearn import datasets
from keras.layers import Dense
from sklearn.model_selection import train_test_split
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn import preprocessing
import pickle
import numpy as np
import json
def classifier(X, y):
    """
    Description of classifier
    """
    NOF_ROW, NOF_COL = X.shape

    def create_model():
        # create model
        model = Sequential()
        model.add(Dense(12, input_dim=NOF_COL, init='uniform', activation='relu'))
        model.add(Dense(6, init='uniform', activation='relu'))
        model.add(Dense(1, init='uniform', activation='sigmoid'))
        # Compile model
        model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
        return model

    # evaluate using 10-fold cross validation
    seed = 7
    np.random.seed(seed)
    model = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10, verbose=0)
    return model
def main():
    """
    Description of main
    """
    iris = datasets.load_iris()
    X, y = iris.data, iris.target
    X = preprocessing.scale(X)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
    model_tt = classifier(X_train, y_train)
    model_tt.fit(X_train, y_train)

    #--------------------------------------------------
    # This fails
    #--------------------------------------------------
    filename = 'finalized_model.sav'
    pickle.dump(model_tt, open(filename, 'wb'))

    # load the model from disk
    loaded_model = pickle.load(open(filename, 'rb'))
    result = loaded_model.score(X_test, y_test)
    print(result)

    #--------------------------------------------------
    # This also fails
    #--------------------------------------------------
    # from keras.models import load_model
    # model_tt.save('test_model.h5')

    #--------------------------------------------------
    # This works OK
    #--------------------------------------------------
    # print(model_tt.score(X_test, y_test))
    # print(model_tt.predict_proba(X_test))
    # print(model_tt.predict(X_test))

    # Output of predict_proba
    # 2nd column is the probability that the prediction is 1
    # this value is used as the final score, which can be used
    # with other methods for comparison
    # [[ 0.25311464  0.74688536]
    #  [ 0.84401423  0.15598579]
    #  [ 0.96047372  0.03952631]
    #  ...,
    #  [ 0.25518912  0.74481088]
    #  [ 0.91467732  0.08532269]
    #  [ 0.25473493  0.74526507]]

    # Output of predict
    # [[1]
    #  [0]
    #  [0]
    #  ...,
    #  [1]
    #  [0]
    #  [1]]

if __name__ == '__main__':
    main()
As stated in the code there it fails at this line:
pickle.dump(model_tt, open(filename, 'wb'))
With this error:
pickle.PicklingError: Can't pickle <function create_model at 0x101c09320>: it's not found as __main__.create_model
How can I get around it?
Edit 1: Original answer about saving the model
With HDF5 :
# saving model
json_model = model_tt.model.to_json()
open('model_architecture.json', 'w').write(json_model)
# saving weights
model_tt.model.save_weights('model_weights.h5', overwrite=True)
# loading model
from keras.models import model_from_json
model = model_from_json(open('model_architecture.json').read())
model.load_weights('model_weights.h5')
# dont forget to compile your model
model.compile(loss='binary_crossentropy', optimizer='adam')
Edit 2: Full code example with the iris dataset
# Train model and make predictions
import numpy
import pandas
from keras.models import Sequential, model_from_json
from keras.layers import Dense
from keras.utils import np_utils
from sklearn import datasets
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)
# load dataset
iris = datasets.load_iris()
X, Y, labels = iris.data, iris.target, iris.target_names
X = preprocessing.scale(X)
# encode class values as integers
encoder = LabelEncoder()
encoder.fit(Y)
encoded_Y = encoder.transform(Y)
# convert integers to dummy variables (i.e. one hot encoded)
y = np_utils.to_categorical(encoded_Y)
def build_model():
    # create model
    model = Sequential()
    model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
    model.add(Dense(3, init='normal', activation='sigmoid'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

def save_model(model):
    # saving model
    json_model = model.to_json()
    open('model_architecture.json', 'w').write(json_model)
    # saving weights
    model.save_weights('model_weights.h5', overwrite=True)

def load_model():
    # loading model
    model = model_from_json(open('model_architecture.json').read())
    model.load_weights('model_weights.h5')
    model.compile(loss='categorical_crossentropy', optimizer='adam')
    return model
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.3, random_state=seed)
# build
model = build_model()
model.fit(X_train, Y_train, nb_epoch=200, batch_size=5, verbose=0)
# save
save_model(model)
# load
model = load_model()
# predictions
predictions = model.predict_classes(X_test, verbose=0)
print(predictions)
# reverse encoding
for pred in predictions:
    print(labels[pred])
Please note that I used Keras only, not the wrapper. It only adds some complexity to something simple. Also, the code is voluntarily not factored, so you can see the whole picture.
Also, you said you want to output 1 or 0. That is not possible with this dataset, because you have 3 output dims and classes (Iris-setosa, Iris-versicolor, Iris-virginica). If you had only 2 classes, then your output dim and classes would be 0 or 1, using a sigmoid output function.
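For what it's worth, a minimal sketch of that hypothetical two-class setup (it does not apply to the 3-class iris data):
# hypothetical binary head: a single sigmoid node giving the probability of class 1
model = Sequential()
model.add(Dense(4, input_dim=4, init='normal', activation='relu'))
model.add(Dense(1, init='normal', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])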
Just adding to gaarv's answer - If you don't require the separation between the model structure (model.to_json()) and the weights (model.save_weights()), you can use one of the following:
Use the built-in keras.models.save_model and keras.models.load_model, which store everything together in an HDF5 file.
Use pickle to serialize the Model object (or any class that contains references to it) into a file/network/whatever. Unfortunately, Keras doesn't support pickle by default. You can use my patchy solution that adds this missing feature. Working code is here: http://zachmoshe.com/2017/04/03/pickling-keras-models.html
Another great alternative is to use callbacks when you fit your model, specifically the ModelCheckpoint callback, like this:
from keras.callbacks import ModelCheckpoint

# create an instance of ModelCheckpoint
chk = ModelCheckpoint("myModel.h5", monitor='val_loss', save_best_only=False)
# add that callback to the list of callbacks to pass
callbacks_list = [chk]
# create your model
model_tt = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10)
# fit your model with your data. Pass the callback(s) here
model_tt.fit(X_train, y_train, callbacks=callbacks_list)
This will save your model at each epoch to the myModel.h5 file. This provides great benefits, as you are able to stop your training when you desire (like when you see it has started to overfit) and still retain the previous training.
Note that this saves both the structure and the weights in the same HDF5 file (as shown by Zach), so you can then load your model using keras.models.load_model.
If you want to save only your weights separately, you can then use the save_weights_only=True argument when instantiating your ModelCheckpoint, enabling you to load your model as explained by Gaarv. Extracting from the docs:
save_weights_only: if True, then only the model's weights will be saved (model.save_weights(filepath)), else the full model is saved (model.save(filepath)).
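For instance, a minimal sketch of the weights-only variant, reusing create_model and the data from the question:
from keras.callbacks import ModelCheckpoint

# save only the weights at the end of each epoch
chk = ModelCheckpoint("myWeights.h5", save_weights_only=True)
model_tt = KerasClassifier(build_fn=create_model, nb_epoch=150, batch_size=10)
model_tt.fit(X_train, y_train, callbacks=[chk])

# later: rebuild the architecture, then load the stored weights into it
model = create_model()
model.load_weights("myWeights.h5")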
The accepted answer is too complicated. You can fully save and restore every aspect of your model in a .h5 file. Straight from the Keras FAQ:
You can use model.save(filepath) to save a Keras model into a single
HDF5 file which will contain:
the architecture of the model, allowing to re-create the model
the weights of the model
the training configuration (loss, optimizer)
the state of the optimizer, allowing to resume training exactly where you left off.
You can then use keras.models.load_model(filepath) to reinstantiate your model. load_model will also take care of compiling the model using the saved training configuration (unless the model was never compiled in the first place).
And the corresponding code:
from keras.models import load_model
model.save('my_model.h5') # creates a HDF5 file 'my_model.h5'
del model # deletes the existing model
# returns a compiled model identical to the previous one
model = load_model('my_model.h5')
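If your model lives inside the scikit-learn wrapper (as in the question), the underlying Keras model is exposed through the wrapper's .model attribute once fitted, so the same approach applies:
# the wrapper's fitted Keras model is available as .model
model_tt.fit(X_train, y_train)
model_tt.model.save('my_model.h5')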
In case your Keras wrapper model is in a scikit-learn pipeline, you save the steps in the pipeline separately:
import joblib
from sklearn.pipeline import Pipeline
from tensorflow import keras
# pass the create_cnn_model function into the wrapper
# ('create_cnn_model' and 'pipeline_estimator' are assumed to be defined
# earlier: the model-building function and the preprocessing pipeline)
cnn_model = keras.wrappers.scikit_learn.KerasClassifier(build_fn=create_cnn_model)

# create pipeline
cnn_model_pipeline_estimator = Pipeline([
    ('preprocessing_pipeline', pipeline_estimator),
    ('clf', cnn_model)
])

# train model
final_model = cnn_model_pipeline_estimator.fit(
    X, y, clf__batch_size=32, clf__epochs=15)

# collect the preprocessing pipeline & model separately
pipeline_estimator = final_model.named_steps['preprocessing_pipeline']
clf = final_model.named_steps['clf']

# store pipeline and model separately
joblib.dump(pipeline_estimator, 'path/to/pipeline.pkl')
clf.model.save('path/to/model.h5')

# load pipeline and model
pipeline_estimator = joblib.load('path/to/pipeline.pkl')
model = keras.models.load_model('path/to/model.h5')
new_example = [[...]]
# transform new data with pipeline & use model for prediction
transformed_data = pipeline_estimator.transform(new_example)
prediction = model.predict(transformed_data)