Keras concatenate is not defined when loading the model - python

In one of my lambda layers, I used from keras.layers import concatenate to concatenate two tensors and it worked without any problem during training and I successfully saved the model files.
However, when I'm loading the model, it throws me this error:
NameError: name 'concatenate' is not defined
Does anyone know what might be wrong? I've imported concatenate before I load the model.
The lambda layer looks like this:
def concat_l1_l2(vests):
    l1, l2 = vests
    l1 = K.l2_normalize(l1, axis=-1)
    l2 = K.l2_normalize(l2, axis=-1)
    return concatenate([l1, l2])

I had the same issue when loading a model from a JSON file. Try the following line (it worked for me):
from keras.layers import concatenate
model_from_json(model_file, custom_objects={'concatenate': concatenate})

Maybe the following will solve your problem.
Try passing your custom function to Keras' load function, i.e.
load_model(model_path, custom_objects={"concat_l1_l2": concat_l1_l2})
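Combining both answers, a minimal sketch of the full loading call might look like this (the HDF5 file name is an assumption; adjust it to however you saved the model):

from keras import backend as K
from keras.layers import concatenate
from keras.models import load_model

# Redefine the Lambda body so it can be registered at load time.
def concat_l1_l2(vests):
    l1, l2 = vests
    l1 = K.l2_normalize(l1, axis=-1)
    l2 = K.l2_normalize(l2, axis=-1)
    return concatenate([l1, l2])

# Register both the custom function and concatenate so Keras can
# resolve them while deserializing the Lambda layer.
model = load_model('my_model.h5',
                   custom_objects={'concat_l1_l2': concat_l1_l2,
                                   'concatenate': concatenate})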

Related

Attribute Error: 'Embedding' object has no attribute 'embeddings' - TensorFlow & Keras

Okay so I have a keras model that I fully ran and then saved the weights with this line:
model.save_weights("rho_beta_true_tf", save_format="tf")
Then in another file I build just the model and then I load the weights from the model I ran above using this line:
model_build.load_weights("rho_beta_true_tf")
When I then go to call some of the attributes everything displays correctly except when I try to run this line:
model_build.stimuli.embeddings
or
model_build.stimuli.embeddings.numpy()[0]
I get an attribute error saying:
AttributeError: 'Embedding' object has no attribute 'embeddings'
This line is supposed to return a tensor, and every other attribute I have called so far works, so I am not sure whether it just can't find the tensors or the problem is something else. Could someone please help me figure out how to solve this AttributeError?
Try using .get_weights():
model_build.stimuli.get_weights()
It turns out that because I had saved the weights in tf format, I had to follow this step from the TensorFlow documentation:
For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor.
So then the line
build_model.stimuli.embedding(put the directory path to your custom embedding layer here)
worked!
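For reference, a minimal sketch of what that documentation note means in practice (the layer sizes and the dummy call are illustrative, not the asker's actual model):

import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # Assigning the layer to an attribute lets tf-format checkpoints track it.
        self.stimuli = tf.keras.layers.Embedding(input_dim=100, output_dim=16)
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(self.stimuli(inputs))

model_build = MyModel()
model_build(tf.zeros((1, 4), dtype=tf.int32))     # build the variables first
model_build.load_weights("rho_beta_true_tf")      # path from the question; shapes must match
print(model_build.stimuli.embeddings.numpy()[0])  # now resolves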

Tensorflow Data Adapter Error: ValueError: Failed to find data adapter that can handle input

I am running a sentdex tutorial script for a cryptocurrency RNN (YouTube Tutorial: Cryptocurrency-predicting RNN Model, link here), but I have encountered an error when attempting to train the model. My tensorflow version is 2.0.0 and I'm running python 3.6. When attempting to train the model I receive the following error:
File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 734, in fit
use_multiprocessing=use_multiprocessing)
File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 224, in fit
distribution_strategy=strategy)
File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\training_v2.py", line 497, in _process_training_inputs
adapter_cls = data_adapter.select_data_adapter(x, y)
File "C:\python36-64\lib\site-packages\tensorflow_core\python\keras\engine\data_adapter.py", line 628, in select_data_adapter
_type_name(x), _type_name(y)))
ValueError: Failed to find data adapter that can handle input: <class 'numpy.ndarray'>, (<class 'list'> containing values of types {"<class 'numpy.float64'>"})
Any advice would be greatly appreciated!
Have you checked whether your training/testing data and training/testing labels are all numpy arrays? It might be that you're mixing numpy arrays with lists.
You can avoid this error by converting your labels to arrays before calling model.fit():
import numpy as np

train_x = np.asarray(train_x)
train_y = np.asarray(train_y)
validation_x = np.asarray(validation_x)
validation_y = np.asarray(validation_y)
If you encounter this problem while using a custom generator that inherits from the keras.utils.Sequence class, make sure you do not mix plain Keras imports with tensorflow.keras imports.
This can happen in particular when you have to switch to an older tensorflow version for compatibility reasons (for example with cuDNN).
If you, for example, wrote this for a tensorflow version > 2...
from keras.utils import Sequence

class generatorClass(Sequence):
    def __init__(self, x_set, y_set, batch_size):
        ...
    def __len__(self):
        ...
    def __getitem__(self, idx):
        return ...
... but you actually fit this generator in a tensorflow version < 2, you have to make sure to import the Sequence class from that version, like:
keras = tf.compat.v1.keras
Sequence = keras.utils.Sequence

class generatorClass(Sequence):
    ...
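For context, a complete minimal Sequence subclass might look like the sketch below; the batching logic is illustrative, and tensorflow.keras is assumed as the flavor your model uses (the point of the answer above is only that Sequence must come from the same Keras as the model):

import math
import numpy as np
from tensorflow.keras.utils import Sequence  # must match the Keras your model uses

class GeneratorClass(Sequence):
    def __init__(self, x_set, y_set, batch_size):
        self.x = np.asarray(x_set)
        self.y = np.asarray(y_set)
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch.
        return math.ceil(len(self.x) / self.batch_size)

    def __getitem__(self, idx):
        # Return one batch of (inputs, targets).
        lo = idx * self.batch_size
        hi = lo + self.batch_size
        return self.x[lo:hi], self.y[lo:hi]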
I had a similar problem. In my case the problem was that I was using a tf.keras.Sequential model together with a plain keras generator.
Wrong:
from keras.preprocessing.sequence import TimeseriesGenerator
gen = TimeseriesGenerator(...)
Correct:
gen = tf.keras.preprocessing.sequence.TimeseriesGenerator(...)
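A quick usage sketch of the tf.keras generator, with made-up data and window parameters just for illustration:

import numpy as np
import tensorflow as tf

series = np.arange(100, dtype='float32')
# Predict each value from the previous 10 values.
gen = tf.keras.preprocessing.sequence.TimeseriesGenerator(
    series, series, length=10, batch_size=8)
x_batch, y_batch = gen[0]   # first batch of (samples, targets)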
This error occurred when I updated tensorflow from 1.x to 2.x.
It was solved after changing my import from
import keras
to
import tensorflow.keras as keras
For some reason I also experienced this problem when I passed my custom generator function directly to model.fit(), rather than creating an instance of it first.
I.e., given:
def batch_generator(...):
    ...
    yield (...)
I called model.fit(batch_generator,...), rather than:
generator_instance = batch_generator(...)
model.fit(generator_instance, ...)
Maybe it will help someone. First check your data type: it may be a numpy array while your algorithm requires a DataFrame.
print(X.shape, X.dtype)
print(y.shape, y.dtype)
If so, convert the numpy arrays into pandas DataFrames:
import pandas as pd

train_x = pd.DataFrame(train_x)
train_y = pd.DataFrame(train_y)

Using custom tensorflow ops in keras

I have a script in tensorflow which contains custom tensorflow ops. I want to port the code to keras, and I am not sure how to call the custom ops within the keras code.
I want to use tensorflow within keras, so the tutorial I have found so far describes the opposite of what I want: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html.
I also read about Lambda layers, which can wrap arbitrary custom functions, yet I did not see an example for tf ops.
If you could provide a code snippet with the simplest example of how to do that, I would be very grateful. For example, assume the tf op is:
outC = my_custom_op(inA, inB)
---EDIT:
A similar problem has been described here - essentially calling this custom op in keras - but I cannot work out how to apply that solution to another example I care about, for instance this one. That custom tf op is first compiled (for GPU) and then, so far, used within tensorflow as here (see line 40). It is clear to me how to wrap a custom (lambda) function in a Lambda layer; what I would like to understand is how to use the compiled custom ops when I use keras.
You can wrap arbitrary tensorflow functions in a keras Lambda layer and add them to your model. Minimal working example from this answer:
import tensorflow as tf
from keras.layers import Dense, Lambda, Input
from keras.models import Model
W = tf.random_normal(shape=(128,20))
b = tf.random_normal(shape=(20,))
inp = Input(shape=(10,))
x = Dense(128)(inp)
# Custom linear transformation
y = Lambda(lambda x: tf.matmul(x, W) + b, name='custom_layer')(x)
model = Model(inp, y)
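The same pattern extends to a compiled custom op: load the shared library with tf.load_op_library and call the generated wrapper inside the Lambda. A rough sketch, where the .so path, the op name my_custom_op, and the input shapes are all assumptions about your particular build:

import tensorflow as tf
from keras.layers import Input, Lambda
from keras.models import Model

# Load the compiled op library (path and op name are assumptions).
custom_ops = tf.load_op_library('./my_custom_op.so')

inA = Input(shape=(16,))
inB = Input(shape=(16,))

# The Lambda receives a list of tensors and forwards them to the compiled op.
outC = Lambda(lambda t: custom_ops.my_custom_op(t[0], t[1]),
              name='custom_op_layer')([inA, inB])

model = Model(inputs=[inA, inB], outputs=outC)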

keras model.save() raise NotImplementedError

I have tried the keras NMT code from the following link: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb
But when I tried to save the model, I get a NotImplementedError:
File "m.py", line 310, in <module>
main()
File "m.py", line 244, in main
encoder.save('/home/zzj/temp/encoder.h5')
File "/home/zzj/tensorflow/lib/python3.5/site-packages/tensorflow/python/keras/engine/network.py", line 1218, in save
raise NotImplementedError
The Encoder and Decoder subclass tf.keras.Model, and tf.keras.Model is a subclass of Network. After reading the code in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/engine/network.py
I found that these two classes' _is_graph_network flag is False. I tried to set the flag to True but got another error. So how can I save the model the author defined in the code?
I had a similar problem with the current tf version (1.11). I used the tf.keras API to define my model and trained it without problems. When I wanted to save my model using tensorflow.keras.models.save_model or model.save() (which just calls save_model), I got the following exception:
NotImplementedError: __deepcopy__() is only available when eager execution is enabled.
So I called tf.enable_eager_execution(), but because of a Lambda layer in my architecture I ended up with another NotImplementedError, this time for compute_output_shape. If your architecture does not contain a Lambda layer, enabling eager execution could fix your problem in tf 1.11.
My final way to go was to use model.save_weights('model_weights.h5'), because I did not need to save the model architecture, just the trained weights.
Btw: in my case it was also possible to switch from tensorflow.keras.* imports to keras.* and use just "plain" keras with the tf backend (model.save() works there, of course).
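For a subclassed model, the weights-only workflow is roughly the sketch below (the Encoder stand-in and layer sizes are illustrative, not the tutorial's code): save the weights, rebuild the architecture in code, then load the weights back into it.

import tensorflow as tf

class Encoder(tf.keras.Model):
    # Illustrative stand-in for the subclassed encoder.
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(32)

    def call(self, x):
        return self.dense(x)

encoder = Encoder()
encoder(tf.zeros((1, 8)))               # build the variables
encoder.save_weights('encoder_ckpt')    # TF checkpoint format works for subclassed models

restored = Encoder()                    # rebuild the architecture in code
restored(tf.zeros((1, 8)))              # build before loading
restored.load_weights('encoder_ckpt')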
model.save('model.h5py') may solve the problem. The key is to save it as an HDF5 file.
Consider using tf.keras.models.save_model() and load_model(), it may work.

Unable to get hidden layer activation of ANN

I am using python2 and I'm trying to get the activations of a hidden layer. I am using the following code, which gives me an error:
get_activations = theano.function([my_model.layers[0].input],
                                  my_model.layers[0].get_output(train=False),
                                  allow_input_downcast=True)
When I run the code it says:
AttributeError: 'Dense' object has no attribute 'get_output'
I have tried to use my_model.layers[0].output which also does not work correctly.
What should I do to get the activations from a layer given?
The attribute get_output is only defined in old versions of keras (0.3); it no longer exists in version 1.0.
See the new syntax (keras doc FAQ). Something like
get_activations = K.function([my_model.layers[0].input], [my_model.layers[1].output])
should work, since the hidden layer is the second layer in your model (i.e. my_model.layers[1]).
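Spelled out with the imports it needs, a minimal sketch might look like this (the random batch is just a placeholder for your real input data):

from keras import backend as K
import numpy as np

get_activations = K.function([my_model.layers[0].input],
                             [my_model.layers[1].output])

x_batch = np.random.rand(32, my_model.input_shape[1])  # placeholder input batch
hidden_activations = get_activations([x_batch])[0]
print(hidden_activations.shape)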
