I am making a binary sound classification model with Keras on Python 3.7. I previously built the sound classification model in MATLAB, but some specific layers are not available there (e.g., GRU). So I am trying to convert the MATLAB deep learning model to a Keras deep learning model.
The original MATLAB code is shown below:
inputsize=[31,69]
layers = [ ...
sequenceInputLayer(inputsize(1))
bilstmLayer(200,'OutputMode','last')
fullyConnectedLayer(2)
softmaxLayer
classificationLayer
]
options = trainingOptions('adam', ...
'MaxEpochs',30, ...
'MiniBatchSize', 200, ...
'InitialLearnRate', 0.01, ...
'GradientThreshold', 1, ...
'ExecutionEnvironment',"auto",...
'plots','training-progress', ...
'Verbose',false);
This model reaches an accuracy of 0.955.
The Keras code based on the MATLAB code is shown below:
# traindatasize=(86400,31,69)
inputsize=(31,69)
batchsize=200
epochs=30
model = Sequential()
model.add(Bidirectional(LSTM(200, input_shape=inputsize)))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy'])
model.fit(traindata, trainlabel, batch_size=batchsize, epochs=epochs, verbose=1)
This model only reaches an accuracy of 0.444.
I don't understand what causes the difference.
Both models were trained on the same data: STFT features, normalized before training using the mean and standard deviation.
Any comments are appreciated.
Python 3.7 on Anaconda
Keras 2.2.4
I think that's because the MATLAB code uses the Adam optimizer for training, and you defined RMSprop instead in:
model.compile(optimizer=RMSprop(), loss='binary_crossentropy', metrics=['accuracy'])
instead, use:
from keras import optimizers
adam = optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999, amsgrad=False)  # Keras 2.2.4 uses lr; newer versions use learning_rate
...
model.compile(optimizer=adam, loss='binary_crossentropy', metrics=['accuracy'])
Check if this improves the accuracy.
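To mirror the remaining MATLAB training options, here is a minimal sketch, assuming trainlabel is one-hot encoded: it carries over the initial learning rate of 0.01, approximates 'GradientThreshold', 1 with clipnorm, and uses categorical_crossentropy, which matches the 2-unit softmax output better than binary_crossentropy:
from keras.models import Sequential
from keras.layers import Bidirectional, LSTM, Dense
from keras import optimizers

model = Sequential()
# input_shape goes on the Bidirectional wrapper, mirroring sequenceInputLayer + bilstmLayer(..., 'OutputMode', 'last')
model.add(Bidirectional(LSTM(200), input_shape=(31, 69)))
model.add(Dense(2, activation='softmax'))

# Adam with MATLAB's InitialLearnRate; clipnorm roughly corresponds to 'GradientThreshold', 1
adam = optimizers.Adam(lr=0.01, clipnorm=1.)
model.compile(optimizer=adam, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(traindata, trainlabel, batch_size=200, epochs=30, verbose=1)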
Related
I want to save a trained Keras model, so that it can be used in the Django REST backend of an application. I did a lot of research, but it seems there isn't any way to use these models without TensorFlow installed.
So what is the point of saving the model at all? I don't want to install a heavy library like TensorFlow on the server. I tried saving with pickle and joblib, as well as Keras' own model.save().
Is there a way to load this model without installing TensorFlow and only with Keras itself?
This is a part of my code,
from keras.models import Sequential
from keras.layers import Dense, LSTM, Dropout
xtrain, ytrain = np.array(xtrain), np.array(ytrain)
ytrain = np.reshape(ytrain, (ytrain.shape[0], 1, 1))
model = Sequential()
model.add(LSTM(150, return_sequences=True, input_shape=(xtrain.shape[1], 1)))
model.add(LSTM(150, return_sequences=False))
model.add(Dense(25))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mean_squared_error')
model.fit(xtrain, ytrain, batch_size=1, epochs=7)
model.save('model.h5')
which normally works perfectly, but if I use the model elsewhere, I get this error:
ModuleNotFoundError: No module named 'tensorflow'
You do not need to use TensorFlow in production. You can extract the trained weights (coefficients) and re-implement the forward pass with plain array operations in your programming language of choice.
Sample: an input array is multiplied by the trained coefficient matrix and passed through a softmax to select an output action.
temp = np.random.normal(1, 0.2, 10).astype(np.float32)  # placeholder input; no TensorFlow needed
temp = temp * np.asarray([ coefficient_0, coefficient_1, coefficient_2, coefficient_3, coefficient_4, coefficient_5, coefficient_6, coefficient_7, coefficient_8, coefficient_9 ])  # trained coefficients
temp = np.exp(temp) / np.sum(np.exp(temp))  # softmax without tf.nn.softmax
action = int(np.argmax(temp))  # the original used argmin; argmax selects the most likely action
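If you go this route, a minimal sketch for extracting the trained weights from the Keras model so they can be shipped without TensorFlow (the file name is assumed):
import numpy as np
weights = model.get_weights()  # list of numpy arrays: kernels and biases, layer by layer
np.save('weights.npy', np.array(weights, dtype=object), allow_pickle=True)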
I have an LSTM model to perform binary classification of human activities using multivariate smartphone sensor data. The two classes are imbalanced (1:50). Therefore I would like to use the F1-score as a metric, but I saw that it was removed as a metric from Keras.
Previously, best practice was to use a callback function for the metric to ensure it was applied to the whole dataset; however, TensorFlow Addons recently reintroduced the F1-score.
I now have a problem applying this metric to my functional API model. Here is the code I am currently running:
import tensorflow as tf
import tensorflow_addons as tfa
from tensorflow import keras

def create_model(n_neurons=150, learning_rate=0.01, activation="relu", loss="binary_crossentropy"):
    # create input layer and assign to current output layer
    input_ = keras.layers.Input(shape=(X_train.shape[1], X_train.shape[2]))
    # add LSTM layer
    lstm = keras.layers.LSTM(n_neurons, activation=activation)(input_)
    # output layer
    output = keras.layers.Dense(1, activation="sigmoid")(lstm)
    # create model
    model = keras.models.Model(inputs=[input_], outputs=[output])
    # add optimizer
    optimizer = keras.optimizers.SGD(lr=learning_rate, clipvalue=0.5)
    # compile model
    model.compile(loss=loss, optimizer=optimizer, metrics=[tfa.metrics.F1Score(num_classes=2, average="micro")])
    print(model.summary())
    return model
#Create the model
model = create_model()
#fit the model
history = model.fit(X_train, y_train,
                    epochs=300,
                    validation_data=(X_val, y_val))
If I use another value for the metric argument average (e.g., average=None or average="macro") then I get an error message when fitting the model:
ValueError: Dimension 0 in both shapes must be equal, but are 2 and 1. Shapes are [2] and [1]. for 'AssignAddVariableOp' (op: 'AssignAddVariableOp') with input shapes: [], [1].
And if I use the value average="micro", I do not get the error, but the F1-score is 0 throughout the learning process, while my loss decreases.
I believe I am still doing something wrong here. Can anybody provide an explanation?
Updated answer: The crucial bit is to import tf.keras, not keras. Then you can use e.g. tf.keras.metrics.Precision or tfa.metrics.F1Score without problems. See also here.
Old answer:
The problem with tensorflow-addons is that the implementation of the current release (0.6.0) only counts exact matches, such that a comparison e.g. of 1 and 0.99 yields 0. Of course, this is practically useless in a neural network. This has been fixed in 0.7.0 (not yet released). You can install it as follows:
pip3 install --upgrade pip
pip3 install tfa-nightly
and then use a threshold (everything below the threshold is counted as 0, otherwise as 1):
tfa.metrics.FBetaScore(num_classes=2, average="micro", threshold=0.9)
See also https://github.com/tensorflow/addons/issues/490.
The problem with other values for average is discussed here: https://github.com/tensorflow/addons/issues/746.
Beware that there are two other problems that probably lead to useless results, see also https://github.com/tensorflow/addons/issues/818:
- the model uses binary classification, but the f1-score in tfa assumes categorical classification with one-hot encoding
- the f1-score is computed at each batch step during validation
These problems should not appear when using the Keras metrics.
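For illustration, a minimal sketch of that binary-friendly alternative (the optimizer settings are carried over from the question): keep the single sigmoid output and use the built-in tf.keras metrics, which threshold the predictions and make no one-hot assumption:
import tensorflow as tf

model.compile(
    loss="binary_crossentropy",
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, clipvalue=0.5),
    metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)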
In this example, I will show how to use the AdamW optimizer and F1Score with TensorFlow.
import tensorflow_addons as tfa
from tensorflow import keras

initial_learning_rate = 0.001  # assumed starting value

lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
f1 = tfa.metrics.F1Score(num_classes=2, average=None)
model = (..)  # model definition elided
model.compile(
    loss="binary_crossentropy",
    optimizer=tfa.optimizers.AdamW(learning_rate=lr_schedule, weight_decay=0.0001),
    metrics=["acc", "AUC", f1],
)
I have written the code below for image captioning in Keras and it works fine.
image_model = Sequential()
image_model.add(Dense(EMBEDDING_DIM, input_shape=(2048,), activation='relu'))
image_model.add(RepeatVector(max_length))
lang_model = Sequential()
lang_model.add(Embedding(vocab_size,EMBEDDING_DIM , input_length=max_length))
lang_model.add(Bidirectional(LSTM(256,return_sequences=True)))
lang_model.add(Dropout(0.5))
lang_model.add(BatchNormalization())
lang_model.add(TimeDistributed(Dense(EMBEDDING_DIM)))
fin_model = Sequential()
fin_model.add(Merge([image_model, lang_model], mode='concat'))
#model.add(Concatenate([image_model, lang_model]))
fin_model.add(Dropout(0.5))
fin_model.add(BatchNormalization())
fin_model.add(Bidirectional(LSTM(1000,return_sequences=False)))
fin_model.add(Dense(vocab_size))
fin_model.add(Activation('softmax'))
print ("Model created!")
fin_model.compile(loss='categorical_crossentropy', optimizer='Adam', metrics=['accuracy'])
But I want to add an attention mechanism here. There is no need for a bidirectional LSTM; a plain LSTM is also fine. I have not found a single useful blog post that explains how to do this in Keras. Since I am very new to deep learning and Keras is the only Python library I know, any help is much appreciated.
I am currently working on a VGG16 model with Keras.
I fine-tune the VGG model with some layers of my own.
After fitting my model (training), I save it with model.save('name.h5').
It saves without problem.
However, when I try to reload the model with the load_model function, I get this error:
You are trying to load a weight file containing 17 layers into a model
with 0 layers
Has anyone met this problem before?
My Keras version is 2.2.
Here is part of my code ...
from keras.models import Sequential, load_model
from keras.applications.vgg16 import VGG16
from keras.layers import Flatten, Dense, Dropout

vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
global model_2
model_2 = Sequential()
for layer in vgg_model.layers:
    model_2.add(layer)
for layer in model_2.layers:
    layer.trainable = False
model_2.add(Flatten())
model_2.add(Dense(128, activation='relu'))
model_2.add(Dropout(0.5))
model_2.add(Dense(2, activation='softmax'))
model_2.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model_2.fit(x=X_train, y=y_train, batch_size=32, epochs=30, verbose=2)
model_2.save('name.h5')
del model_2
model_2 = load_model('name.h5')
(Actually, I do not delete the model and then immediately call load_model; this is just to demonstrate the problem.)
It seems that this problem is related to the input_shape parameter of the first layer. I had this problem with a wrapper layer (Bidirectional) which did not have an input_shape parameter set. In code:
model.add(Bidirectional(LSTM(units=units, input_shape=(None, feature_size)), merge_mode='concat'))
did not work for loading my old model, because the input_shape is only defined for the LSTM layer, not the outer one. Instead,
model.add(Bidirectional(LSTM(units=units), input_shape=(None, feature_size), merge_mode='concat'))
worked, because the wrapper Bidirectional layer now has an input_shape parameter. Maybe you should check whether the VGG net's input_shape parameter is set, or add a single input layer to your model with the correct input_shape parameter.
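Applied to the question's code, a minimal sketch (input size taken from the VGG16 call above) of adding an explicit input layer so the Sequential model has a defined input_shape when reloaded:
from keras.layers import InputLayer

model_2 = Sequential()
model_2.add(InputLayer(input_shape=(224, 224, 3)))  # explicit input shape
for layer in vgg_model.layers:
    model_2.add(layer)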
I spent 6 hours looking around for a solution to apply my trained model.
Finally, I tried VGG16 as the model, used the .h5 weights I had trained on my own, and it worked!
import numpy as np
from keras import applications
from keras.models import Sequential
from keras.layers import Dense
from keras.preprocessing.image import load_img, img_to_array

weights_model = 'C:/Anaconda/weightsnew2.h5'  # my already trained .h5 weights

vgg = applications.vgg16.VGG16()
cnn = Sequential()
for capa in vgg.layers:
    cnn.add(capa)
cnn.layers.pop()  # drop VGG16's original 1000-class output layer
for layer in cnn.layers:
    layer.trainable = False
cnn.add(Dense(2, activation='softmax'))
cnn.load_weights(weights_model)

def predict(file):
    # longitud/altura: the image width and height used at training time
    x = load_img(file, target_size=(longitud, altura))
    x = img_to_array(x)
    x = np.expand_dims(x, axis=0)
    array = cnn.predict(x)
    result = array[0]
    respuesta = np.argmax(result)
    if respuesta == 0:
        print("Gato")  # cat
    elif respuesta == 1:
        print("Perro")  # dog
In case anyone is still wondering about this error:
I had the same problem and spent days figuring out what was causing it. I have a copy of my whole code and dataset on another system, on which it worked. I noticed that it was something about the training, because without training my model, saving and loading were no problem.
The only difference between my systems was that I was using tensorflow-gpu on my main system, and for this reason the TensorFlow base version was a little lower (1.14.0 instead of 2.2.0). So all I had to do was use
model.fit_generator()
instead of
model.fit()
before saving it. And it works.
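For reference, a minimal sketch (hypothetical generator; batch size taken from the question) of that replacement:
def batch_generator(X, y, batch_size=32):
    # yield batches indefinitely, as fit_generator expects
    while True:
        for i in range(0, len(X), batch_size):
            yield X[i:i + batch_size], y[i:i + batch_size]

model_2.fit_generator(batch_generator(X_train, y_train),
                      steps_per_epoch=len(X_train) // 32,
                      epochs=30, verbose=2)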
I read this very helpful Keras tutorial on transfer learning here:
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
I am thinking that this is probably very applicable to the fish data here, and started going down that route. I tried to follow the tutorial as much as I could. The code is a mess, as I was just trying to figure out how everything works, but it can be found here:
https://github.com/MrChristophRivera/ClassifiyingFish/blob/master/notebooks/Anthony/Resnet50%2BTransfer%20Learning%20Attempt.ipynb
For brevity, here are the steps I did here:
model = ResNet50(include_top=False, weights="imagenet")
# I would resize the image to the standard input size of ResNet50.
datagen = ImageDataGenerator(rescale=1. / 255)
generator = datagen.flow_from_directory(
train_data_dir,
target_size=(img_width, img_height),
batch_size=32,
class_mode=None,
shuffle=False)
# predict on the training data
bottleneck_features_train = model.predict_generator(generator,
                                                    nb_train_samples)
print(bottleneck_features_train)
file_name = join(save_directory, 'tbottleneck_features_train.npy')
np.save(open(file_name, 'wb'), bottleneck_features_train)
# Then I would use this output to feed my top layer and train it.
# Let's say I defined it like so:
top_model = Sequential()
# Skipping some layers for brevity
top_model.add(Dense(8, activation='relu'))
top_model.fit(train_data, train_labels)
top_model.save_weights(top_model_weights_path)
At this time, I have the weights saved. The next step would be to add the top layer to ResNet50. The tutorial simply did it like so:
# VGG16 model defined via Sequential is called bottom_model.
bottom_model.add(top_model)
The problem is that when I try to do this, it fails because "model does not have property add". My guess is that ResNet50 is defined in a different way. At any rate, my question is: how can I add this top model with the loaded weights to the bottom model? Can anyone give helpful pointers?
Try:
input_to_model = Input(shape=shape_of_your_image)
base_model = model(input_to_model)
top_model = Flatten()(base_model)
top_model = Dense(8, activation='relu')(top_model)
...
Your problem comes from the fact that ResNet50 is defined with the so-called functional API. I would also advise you to use a different activation function, because having relu as the output activation might cause problems. Moreover, your model is not compiled.
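To make that concrete, a minimal sketch, assuming a 224x224 RGB input and an 8-class softmax output, of attaching a new top to ResNet50 with the functional API and compiling the result:
from keras.applications.resnet50 import ResNet50
from keras.layers import Input, Flatten, Dense
from keras.models import Model

input_tensor = Input(shape=(224, 224, 3))  # assumed input size
base_model = ResNet50(include_top=False, weights='imagenet')
x = base_model(input_tensor)  # call the base model like a layer
x = Flatten()(x)
predictions = Dense(8, activation='softmax')(x)  # assumed 8 classes

full_model = Model(inputs=input_tensor, outputs=predictions)
full_model.compile(optimizer='adam', loss='categorical_crossentropy',
                   metrics=['accuracy'])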