I am really new to Keras, so forgive me if my query is a bit silly. I installed Keras on my system using the default methods and it works fine. I want to add a new optimizer to Keras so that I can simply specify optimizer='mynewone' in the model.compile function. How do I go about changing the optimizers.py code in Keras and ensuring that the change is reflected in my Keras environment? Here is what I tried:
Suppose I change the optimizer name from rmsprop to rmsprops in the code. I then get the following error:
model.compile(loss='binary_crossentropy', optimizer='rmsprops', metrics=['accuracy'])
Traceback (most recent call last):
File "<ipython-input-33-40773d534448>", line 1, in <module>
model.compile(loss='binary_crossentropy', optimizer='rmsprops', metrics=['accuracy'])
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/models.py", line 589, in compile
**kwargs)
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/engine/training.py", line 469, in compile
self.optimizer = optimizers.get(optimizer)
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/optimizers.py", line 614, in get
# Instantiate a Keras optimizer
File "/home/kiran/anaconda/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 16, in get_from_module
str(identifier))
ValueError: Invalid optimizer: rmsprops
Then when I click on optimizers.py I get the code shipped with Keras in my environment. In that code I replaced all "rmsprop" keywords with "rmsprops" and saved the file, so I thought I must have the updated optimizers.py on my system. But when I go back to my original file and run model.compile, it throws the same error.
Any help would be really appreciated. Thanks in advance.
I think your approach is more complicated than it has to be. Let's say you implement your own optimizer by subclassing keras.optimizers.Optimizer:
class MyOptimizer(Optimizer):
    # optimizer functions here
    ...
Then to instantiate it in your model you can do this:
myOpt = MyOptimizer()
model.compile(loss='binary_crossentropy', optimizer=myOpt, metrics=['accuracy'])
Just pass an instance of your optimizer as the optimizer parameter of model.compile and that's it, Keras will now use your optimizer.
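For illustration, here is a minimal sketch of such a subclass, assuming the standalone Keras 2.x Optimizer API (get_updates / get_config); it implements plain gradient descent and is not meant as a production optimizer:

import keras.backend as K
from keras.optimizers import Optimizer

class MyOptimizer(Optimizer):
    """Bare-bones gradient-descent optimizer, for illustration only."""

    def __init__(self, lr=0.01, **kwargs):
        super(MyOptimizer, self).__init__(**kwargs)
        with K.name_scope(self.__class__.__name__):
            self.lr = K.variable(lr, name='lr')

    def get_updates(self, loss, params):
        grads = self.get_gradients(loss, params)
        self.updates = []
        for p, g in zip(params, grads):
            # plain gradient-descent step: p <- p - lr * g
            self.updates.append(K.update(p, p - self.lr * g))
        return self.updates

    def get_config(self):
        config = {'lr': float(K.get_value(self.lr))}
        base_config = super(MyOptimizer, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))

You can then compile with MyOptimizer() exactly as shown above.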
Are you sure that it is a new optimizer that you want, and not a custom objective function? Custom objectives are easy to define; optimizers are trickier.
There is already a huge number of optimizers with a lot of parameters. However, if you really want to go down that road, I would advise you to implement the optimizer in TensorFlow itself.
Then you will be able to use it in Keras; a sketch of that route follows below.
It's all I can do for you, but maybe there is another way that I don't know of.
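As a hedged sketch of that route, assuming standalone Keras on the TensorFlow 1.x backend: Keras ships a TFOptimizer wrapper that lets a native TensorFlow optimizer drive training.

import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import TFOptimizer

# small placeholder model, just so the example is self-contained
model = Sequential([Dense(1, activation='sigmoid', input_shape=(10,))])

# wrap a native TF 1.x optimizer so Keras can use it
tf_opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
model.compile(loss='binary_crossentropy', optimizer=TFOptimizer(tf_opt), metrics=['accuracy'])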
Related
Hello there, guys. First time making a post here.
So I am trying to make an AI chatbot using Python in the PyCharm IDE. While trying to start training the neural network that would enable the chatbot to work, I ran into this error and was unable to find any resources to help me solve it:
Traceback (most recent call last):
File "C:\Users\Owner\PycharmProjects\ai\main.py", line 71, in
model.compile(loss='categorical_crossentropy', optimiser=sgd, metrics=['accuracy'])
File "C:\Users\Owner\AppData\Roaming\Python\Python39\site-packages\keras\utils\traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "C:\Users\Owner\AppData\Roaming\Python\Python39\site-packages\keras\engine\training.py", line 2983, in _validate_compile
raise TypeError('Invalid keyword argument(s) in `compile()`: '
TypeError: Invalid keyword argument(s) in `compile()`: ({'optimiser'},). Valid keyword arguments include "cloning", "experimental_run_tf_function", "distribute", "target_tensors", or "sample_weight_mode".
Original Code
(i.e. the code where the error occurred)
sgd = gradient_descent_v2.SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimiser=sgd, metrics=['accuracy'])
I also had to import SGD like this because otherwise it couldn't be found:
from keras.optimizers import gradient_descent_v2
If anybody knows how to solve this, please tell me!
This is my first ever post too.
The problem is:
model.compile(loss='categorical_crossentropy', optimiser=sgd, metrics=['accuracy'])
The order of arguments should be changed, and sgd should be included in single quotes. Also, optimiser should be optimizer, with a z.
So it should look like this:
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
Also, in regard to importing, try this:
from tensorflow.keras.optimizers import SGD
And make sure you imported categorical_crossentropy beforehand.
A helpful document from the Keras website: https://www.tensorflow.org/api_docs/python/tf/keras/Model. And here you can find 30 examples that might be helpful: https://www.programcreek.com/python/example/97109/keras.losses.categorical_crossentropy.
Good luck with your project!
In TensorFlow 2.0, the order of arguments doesn't matter, and it doesn't have to be in single quotes either. The only problem above seems to be the 's' instead of 'z' in 'optimizer'.
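Putting both answers together, a minimal sketch of the corrected call (keeping the question's SGD configuration; model is as defined in the question, and under the tf.keras versions of that era SGD still accepted decay=):

from tensorflow.keras.optimizers import SGD

sgd = SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])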
I'm using Keras (with the TensorFlow back-end) to implement a neural network and want to save only the model that minimises loss on the validation set during training. To do this, I instantiated a ModelCheckpoint and passed it when calling the model's fit method. However, when I do this I get the following error: "AttributeError: 'ModelCheckpoint' object has no attribute '_implements_train_batch_hooks'".
The closest thing I have found online is this post with a similar error, where the problem came from mixing modules from keras and tf.keras; however, that is not my case, as all my modules are imported from keras. I've been looking online and through the Keras documentation for a while and can't find anything that could explain this bug. Here are the parts of the code that seem most relevant to the issue:
Imported modules:
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, Dense, Dropout, GlobalMaxPool1D, Concatenate
from keras.callbacks import ModelCheckpoint
ModelCheckpoint instantiation, model compilation and call to fit method:
checkpoint = ModelCheckpoint('../model_best.h5', monitor='val_loss', verbose=1, save_best_only=True, mode='min')
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(x_train, y_train,
epochs = 10, batch_size = 64,
validation_data = (x_val, y_val),
callbacks = [checkpoint])
...and here is the full Traceback:
Traceback (most recent call last):
File "/Users/thisuser/thisrepo/classifier.py", line 39, in <module>
callbacks = [checkpoint])
File "/Users/thisuser/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 66, in _method_wrapper
return method(self, *args, **kwargs)
File "/Users/thisuser/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py", line 826, in fit
steps=data_handler.inferred_steps)
File "/Users/thisuser/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py", line 231, in __init__
cb._implements_train_batch_hooks() for cb in self.callbacks)
File "/Users/thisuser/anaconda3/envs/tf/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py", line 231, in <genexpr>
cb._implements_train_batch_hooks() for cb in self.callbacks)
AttributeError: 'ModelCheckpoint' object has no attribute '_implements_train_batch_hooks'
The versions I'm using are:
Python: 3.7.7
Keras: 2.3.0-tf
Does anyone know what might be causing the issue? If needed I can modify my code slightly to give it all here, so that it is reproducible. Thanks in advance for your help!
I also encountered this problem recently.
What I found out: the keras or tensorflow version was recently updated by the developers, and this causes the problem.
Solution: since the Keras developers require everyone to switch to tf.keras, you need to update the import section of your code.
From:
import keras
To:
import tensorflow.keras as keras
After that everything worked for me.
Replace:
from keras.callbacks import ModelCheckpoint
with:
from tensorflow.keras.callbacks import ModelCheckpoint
I am training a model with Keras and am getting an error from a callback in the fit_generator function. Training always reaches the third epoch and then fails with this error:
# imports needed for the callbacks used below
from keras.callbacks import TensorBoard, ModelCheckpoint, ReduceLROnPlateau, EarlyStopping

annotation_path = 'train2.txt'
log_dir = 'logs/000/'
classes_path = 'model_data/deplao_classes.txt'
anchors_path = 'model_data/yolo_anchors.txt'
class_names = get_classes(classes_path)
num_classes = len(class_names)
anchors = get_anchors(anchors_path)
input_shape = (416,416) # multiple of 32, hw
is_tiny_version = len(anchors)==6 # default setting
if is_tiny_version:
model = create_tiny_model(input_shape, anchors, num_classes,
freeze_body=2, weights_path='model_data/tiny_yolo_weights.h5')
else:
model = create_model(input_shape, anchors, num_classes,
freeze_body=2, weights_path='model_data/yolo_weights.h5') # make sure you know what you freeze
logging = TensorBoard(log_dir=log_dir)
checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=3, verbose=1)
early_stopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=10, verbose=1)
[error]
Traceback (most recent call last):
File "train.py", line 194, in <module>
_main()
File "train.py", line 69, in _main
callbacks=[logging, checkpoint])
File "C:\Users\ilove\AppData\Roaming\Python\Python37\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
File "C:\Users\ilove\AppData\Roaming\Python\Python37\lib\site-packages\keras\engine\training.py", line 1418, in fit_generator
initial_epoch=initial_epoch)
File "C:\Users\ilove\AppData\Roaming\Python\Python37\lib\site-packages\keras\engine\training_generator.py", line 251, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
File "C:\Users\ilove\AppData\Roaming\Python\Python37\lib\site-packages\keras\callbacks.py", line 79, in on_epoch_end
callback.on_epoch_end(epoch, logs)
File "C:\Users\ilove\AppData\Roaming\Python\Python37\lib\site-packages\keras\callbacks.py", line 429, in on_epoch_end
filepath = self.filepath.format(epoch=epoch + 1, **logs)
KeyError: 'val_loss'
Can anyone help me find the problem?
Thanks in advance for your help.
This callback runs at the end of every third epoch (because of period=3):
checkpoint = ModelCheckpoint(log_dir + 'ep{epoch:03d}-loss{loss:.3f}-val_loss{val_loss:.3f}.h5',
monitor='val_loss', save_weights_only=True, save_best_only=True, period=3)
The error message is claiming that there is no val_loss in the logs variable when executing:
filepath = self.filepath.format(epoch=epoch + 1, **logs)
This would happen if fit is called without validation_data.
I would start by simplifying the path name for the model checkpoint; it is probably enough to include the epoch in the name.
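As a hedged sketch (the generator and step-count names here are placeholders, not from the question), supplying validation data is what makes val_loss appear in the logs that the callback formats:

model.fit_generator(
    train_generator,                  # hypothetical training generator
    steps_per_epoch=steps_per_epoch,
    validation_data=val_generator,    # without this, 'val_loss' never enters logs
    validation_steps=validation_steps,
    epochs=50,
    callbacks=[logging, checkpoint],  # the callbacks defined in the question
)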
This answer doesn't apply to the question, but this was at the top of the Google results for keras "KeyError: 'val_loss'" so I'm going to share the solution for my problem.
The error was the same for me: when using val_loss in the checkpoint file name, I would get the following error: KeyError: 'val_loss'. My checkpointer was also monitoring this field, so even if I took the field out of the file name, I would still get this warning from the checkpointer: WARNING:tensorflow:Can save best model only with val_loss available, skipping.
In my case, the issue was that I was upgrading from using Keras and TensorFlow 1 separately to using the Keras that comes with TensorFlow 2. The period param for ModelCheckpoint had been replaced with save_freq. I erroneously assumed that save_freq behaved the same way, so I set it to save_freq=1, thinking this would save the model every epoch. However, the docs state:
save_freq: 'epoch' or integer. When using 'epoch', the callback saves the model after each epoch. When using integer, the callback saves the model at end of a batch at which this many samples have been seen since last saving. Note that if the saving isn't aligned to epochs, the monitored metric may potentially be less reliable (it could reflect as little as 1 batch, since the metrics get reset every epoch). Defaults to 'epoch'
Setting save_freq='epoch' solved the issue for me. Note: the OP was still using period=3, so this is definitely not what was causing their problem.
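A minimal sketch of the working configuration under tf.keras in TF 2 (the file name here is illustrative):

from tensorflow.keras.callbacks import ModelCheckpoint

# period= from standalone Keras was replaced by save_freq= in tf.keras;
# save_freq='epoch' (the default) saves once per epoch, while an integer
# counts samples/batches seen, not epochs.
checkpoint = ModelCheckpoint('model_best.h5', monitor='val_loss',
                             save_best_only=True, save_freq='epoch')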
Use val_accuracy in the filepath and checkpoint. If it still doesn't improve, just restart the PC or Colab.
This error happens when we are not providing validation data to the model. Also check the parameters of model.fit_generator (or model.fit): (train_data, steps_per_epoch, validation_data, validation_steps, epochs, initial_epoch, callbacks).
For me the problem was that I was trying to set initial_epoch (in model.fit) to a value other than the standard 0. I was doing so because I run model.fit in a loop that runs 10 epochs each cycle, then retrieves the history data, checks whether the loss has decreased, and runs model.fit again until it's satisfied.
I thought I had to update the value as I was restarting the previous model, but apparently not...
switch = True
epoch = 0
wait = 0
previous = 10E+10
while switch:
    history = model.fit(X, y, batch_size=1, epochs=step, verbose=False)
    epoch += step
    current = history.history["loss"][-1]
    if current >= previous:
        wait += 1
        if wait >= tolerance:
            switch = False
    else:
        wait = 0
    if epoch >= max_epochs:
        switch = False
    previous = current
In my case, val_generator broke when the Colab notebook tried to read the images from Google Drive. So I ran the cell that creates val_generator again and it worked.
I had this error and didn't manage to find the cause of the bug anywhere online.
What was happening in my case was that I was asking for more training samples than I actually had. TF didn't give me an explicit error for that and it even provided me with a saved value for the loss. I only received the esoteric KeyError: "val_loss" when trying to save that.
Hope this helps someone sniff out their silly bug if that's what's happening to them.
I do not know if this will work in all cases. But for me, restarting my computer seemed to work.
I'm trying to train an LSTM neural net using Keras (version 2.2.0) and TensorFlow (version 1.1.0). I know that there are more recent TensorFlow versions, but unfortunately I'm having some issues installing them. However, I don't believe that my problem is related to the TensorFlow version.
This is what my Keras code looks like:
[...] from keras.layers import Dense, Dropout, LeakyReLU, LSTM, Activation, Input, Embedding
def LSTM(X,Y):
inputDimension = len(X[0])
inputSize = len(X)
# create the model
model = Sequential()
model.add(Embedding(input_length=inputDimension,input_dim=inputDimension,output_dim=256))
model.add(LSTM(100))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X,Y,epochs=3,batch_size=64)
return model
Whenever I try to run it, I end up with the following error message:
Traceback (most recent call last):
File "Main.py", line 208, in lstmModel = ann.LSTM(scaledTrainingX,trainingY)
File "ann.py", line 158, in LSTM model.add(LSTM(100))
TypeError: LSTM() missing 1 required positional argument: 'Y'
I found this question on StackOverflow but the solution suggested there doesn't help because I'm not using a generator to train my network.
Any help to get this network to run would be highly appreciated. Thank you very much.
The function LSTM(X,Y) in which you create your model is shadowing the Keras LSTM layer. So when you call:
model.add(LSTM(100))
you're indeed calling the function that you defined. You need to rename this function to something else.
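A minimal sketch of the fix, assuming the imports from the question (the name build_lstm_model is just an illustrative choice):

def build_lstm_model(X, Y):
    inputDimension = len(X[0])
    model = Sequential()
    model.add(Embedding(input_length=inputDimension, input_dim=inputDimension, output_dim=256))
    model.add(LSTM(100))  # now resolves to keras.layers.LSTM, not the enclosing function
    model.add(Dropout(0.2))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, Y, epochs=3, batch_size=64)
    return model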
I got this error when I tried to modify the learning rate parameter of the SGD optimizer in Keras. Did I miss something in my code, or was my Keras not installed properly?
Here is my code:
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D, Activation
import keras
from keras.optimizers import SGD
model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('softmax'))
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])
and here is the error message:
Traceback (most recent call last):
  File "C:\TensorFlow\Keras\ResNet-50\test_sgd.py", line 10, in <module>
    model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])
  File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras_impl\keras\models.py", line 787, in compile
    **kwargs)
  File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras_impl\keras\engine\training.py", line 632, in compile
    self.optimizer = optimizers.get(optimizer)
  File "C:\Users\nsugiant\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\keras_impl\keras\optimizers.py", line 788, in get
    raise ValueError('Could not interpret optimizer identifier:', identifier)
ValueError: ('Could not interpret optimizer identifier:', <keras.optimizers.SGD object at 0x000002039B152FD0>)
The reason is that you are using the tensorflow.python.keras API for the model and layers, and keras.optimizers for SGD. They are two different Keras implementations: TensorFlow's bundled one and pure Keras. They cannot work together; you have to change everything to one version. Then it should work.
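A minimal sketch of the question's model with every import switched to one side (here the public tf.keras API; in the tf.keras of that era, lr= was still accepted as an alias for learning_rate=):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.optimizers import SGD

model = Sequential()
model.add(Dense(64, kernel_initializer='uniform', input_shape=(10,)))
model.add(Activation('softmax'))
# everything now comes from tf.keras, so compile can interpret the SGD instance
model.compile(loss='mean_squared_error', optimizer=SGD(lr=0.01), metrics=['accuracy'])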
I am a bit late here, but your issue is that you have mixed the TensorFlow Keras and standalone Keras APIs in your code. The optimizer and the model should come from the same package. Use the Keras API for everything, as below:
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, BatchNormalization
from keras.callbacks import TensorBoard
from keras.callbacks import ModelCheckpoint
from keras.optimizers import adam
# Set Model
model = Sequential()
model.add(LSTM(128, input_shape=(train_x.shape[1:]), return_sequences=True))
model.add(Dropout(0.2))
model.add(BatchNormalization())
# Set Optimizer
opt = adam(lr=0.001, decay=1e-6)
# Compile model
model.compile(
loss='sparse_categorical_crossentropy',
optimizer=opt,
metrics=['accuracy']
)
I have used adam in this example. Please substitute your relevant optimizer into the code above.
Hope this helps.
This problem is mainly caused by mixing versions: the tensorflow.keras version may not be the same as the keras one, thus causing the error mentioned by @Priyanka.
For me, whenever this error arises, I pass in the name of the optimizer as a string, and the backend figures it out.
For example instead of
tf.keras.optimizers.Adam
or
keras.optimizers.Adam
I do
model.compile(optimizer='adam', loss=keras.losses.binary_crossentropy, metrics=['accuracy'])
from tensorflow.keras.optimizers import SGD
This works well.
Since TensorFlow 2.0, there is a new API available directly via tensorflow:
https://www.pyimagesearch.com/2019/10/21/keras-vs-tf-keras-whats-the-difference-in-tensorflow-2-0/
This solution works for tensorflow==2.2.0rc2, Keras==2.2.4 (on Win10).
Please also note that the version above uses learning_rate as the parameter name and no longer lr; see the sketch below.
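For example, a minimal hedged one-liner under a recent tf.keras:

from tensorflow.keras.optimizers import SGD

opt = SGD(learning_rate=0.01)  # newer tf.keras API: learning_rate=, not lr=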
For some libraries (e.g. keras_radam) you'll need to set up an environment variable before the import:
import os
os.environ['TF_KERAS'] = '1'
import tensorflow
import your_library
In my case it was because I missed the parentheses. I am using tensorflow_addons, so my code was like:
model.compile(optimizer=tfa.optimizers.LAMB, loss='binary_crossentropy',
metrics=['binary_accuracy'])
And it gives
ValueError: ('Could not interpret optimizer identifier:', <class 'tensorflow_addons.optimizers.lamb.LAMB'>)
Then I changed my code into:
model.compile(optimizer=tfa.optimizers.LAMB(), loss='binary_crossentropy',
metrics=['binary_accuracy'])
and it works.
Recently, in the latest update of the Keras API (2.5.0), importing the Adam optimizer shows the following error:
from keras.optimizers import Adam
ImportError: cannot import name 'Adam' from 'keras.optimizers'
Instead, use the following to import optimizers (i.e. Adam):
from keras.optimizers import adam_v2
optimizer = adam_v2.Adam(learning_rate=lr, decay=lr/epochs)
model.compile(loss='--', optimizer=optimizer, metrics=['--'])
Running the Keras documentation example https://keras.io/examples/cifar10_cnn/ and installing the latest keras and TensorFlow versions (at the time of this writing, tensorflow 2.0.0a0 and Keras 2.2.4), I had to explicitly import the optimizer the example is using. Specifically, the line near the top of the example:
opt = tensorflow.keras.optimizers.rmsprop(lr=0.0001, decay=1e-6)
was replaced by
from tensorflow.keras.optimizers import RMSprop
opt = RMSprop(lr=0.0001, decay=1e-6)
In recent versions the API "broke" and keras.stuff in a lot of cases became tensorflow.keras.stuff. Use one style in one kernel; try not to mix
from keras.optimizers import sth
with
from tensorflow.keras.optimizers import sth
I tried the following and it worked for me:
from keras import optimizers
sgd = optimizers.SGD(lr=0.01)
model.compile(loss='mean_squared_error', optimizer=sgd)
Use
from tensorflow.keras import optimizers
instead of
from keras import optimizers
Try changing your import lines to
from keras.models import Sequential
from keras.layers import Dense, ...
Your imports seem a little strange to me. Maybe you could elaborate more on that.
I had a misplaced parenthesis and got this error.
Initially it was:
x=Conv2D(filters[0],(3,3),use_bias=False,padding="same",kernel_regularizer=l2(reg),x))
The corrected version was:
x=Conv2D(filters[0],(3,3),use_bias=False,padding="same",kernel_regularizer=l2(reg))(x)
Just give optimizer='sgd' or optimizer='RMSprop'.
I got the same error message and resolved this issue, in my case, by replacing the assignment of optimizer:
optimizer=keras.optimizers.Adam
with its instance instead of the class itself:
optimizer=keras.optimizers.Adam()
I tried everything in this thread to fix it, but nothing worked. However, I managed to fix it: the issue was that passing the optimizer class, i.e. tensorflow.keras.optimizers.Adam, caused the error, while instantiating it, i.e. tensorflow.keras.optimizers.Adam(), worked. So my code looks like:
model.compile(
    loss=tensorflow.keras.losses.categorical_crossentropy,  # pass the loss function itself, not a call
    optimizer=tensorflow.keras.optimizers.Adam()
)
Looking at the TensorFlow GitHub, I am not the only one with this error where instantiating the optimizer rather than passing the class fixed the error.