I have a Keras model that I trained fully and then saved the weights with this line:
model.save_weights("rho_beta_true_tf", save_format="tf")
Then, in another file, I build just the model and load the weights saved above with this line:
model_build.load_weights("rho_beta_true_tf")
When I then access some of the attributes, everything displays correctly, except when I try to run this line:
model_build.stimuli.embeddings
or
model_build.stimuli.embeddings.numpy()[0]
I get an attribute error saying:
AttributeError: 'Embedding' object has no attribute 'embeddings'
This line is supposed to return a tensor, and every other attribute I have called so far works, so I am not sure whether it just can't find the tensors or the problem is something else. Could someone please help me figure out how to solve this AttributeError?
Try using .get_weights():
model_build.stimuli.get_weights()
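For example (a minimal sketch, assuming stimuli wraps a standard tf.keras.layers.Embedding layer):
# get_weights() returns a list of NumPy arrays; for an Embedding layer the
# embedding matrix is the first (and only) entry.
weights = model_build.stimuli.get_weights()
embedding_matrix = weights[0]        # shape: (vocab_size, embedding_dim)
first_vector = embedding_matrix[0]   # same value you were after with .embeddings.numpy()[0]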
It turns out that because I had saved the weights in the tf format, I had to follow this step from the TensorFlow documentation:
For user-defined classes which inherit from tf.keras.Model, Layer instances must be assigned to object attributes, typically in the constructor.
So then the line
build_model.stimuli.embedding(put the directory path to your custom embedding layer here)
worked!
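For reference, here is a minimal sketch of what that documentation note means in practice (the class and constructor arguments are hypothetical, not the asker's actual model):
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim):
        super().__init__()
        # Assigning the layer to an object attribute in the constructor is what
        # lets the tf-format checkpoint track and later restore its variables.
        self.stimuli = tf.keras.layers.Embedding(vocab_size, embedding_dim)

    def call(self, inputs):
        return self.stimuli(inputs)

model_build = MyModel(vocab_size=1000, embedding_dim=16)
model_build.load_weights("rho_beta_true_tf")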
I'm trying LGBM's Dask API and when I fit DaskLGBMClassifier I get the following error:
'Future' object has no attribute 'get_params'
I tried to debug it by working through the original code. The variable model that you can see in the error (referenced in that Colab) should be an instance of the class LGBMModel, which seems to have the method get_params().
Why does it say that get_params is an attribute? What is a 'Future' object?
Here are the error image, the model = _train assignment, the _train function, the LGBMModel class, and finally the get_params function.
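For context, a Future in Dask is a handle to a computation that is still running (or has finished) on the cluster, not the computed object itself; a minimal illustration, unrelated to LightGBM:
from dask.distributed import Client

client = Client()                        # start a local Dask cluster
future = client.submit(sum, [1, 2, 3])   # returns a Future immediately
print(type(future))                      # distributed.client.Future
print(future.result())                   # block until the value is ready -> 6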
While the code is compiling, it shows me this error:
RuntimeError: Cannot clone object <tensorflow.python.keras.wrappers.scikit_learn.KerasRegressor object at 0x7fcc84142e50>, as the constructor either does not set or modifies parameter n_neurons
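For context, scikit-learn's clone() reconstructs an estimator from get_params() and requires every parameter to come back unchanged; below is a hedged sketch of a wrapper setup that satisfies that contract (build_model and n_neurons are illustrative names, not the asker's code):
from tensorflow.keras import layers, models
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor

def build_model(n_neurons=30):
    model = models.Sequential([
        layers.Dense(n_neurons, activation="relu", input_shape=(8,)),
        layers.Dense(1),
    ])
    model.compile(loss="mse", optimizer="adam")
    return model

# Pass n_neurons through the wrapper unchanged, so that clone() reads back the
# same value from get_params() that was originally set.
reg = KerasRegressor(build_fn=build_model, n_neurons=30)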
I’m trying to register a forward hook function to the last conv layer of my network. I first printed out the names of the modules via:
for name, _ in model.named_modules():
    print(name)
Which gave me "0.conv" as the module name. However, when I tried to do the following, the above error was triggered by line 4:
features = []
def hook_feature(module, in_, out_):
    features.append(out_.cpu().data.numpy())
model._modules.get("0.conv").register_forward_hook(hook_feature)
Here is my named_modules() output:
...
0.decoder1.dec1conv2
0.decoder1.dec1norm2
0.decoder1.dec1relu2
0.conv
1
1.outc1
1.outc2
What am I doing wrong and how do I fix it? Thanks!
Your modules are stored in a hierarchical way. To get to '0.conv', you need:
model._modules["0"]._modules.get("conv").register_forward_hook(hook_feature)
I have tried the Keras NMT code at the following link: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb
But when I tried to save the model, I get a NotImplementedError:
File "m.py", line 310, in <module>
main()
File "m.py", line 244, in main
encoder.save('/home/zzj/temp/encoder.h5')
File "/home/zzj/tensorflow/lib/python3.5/site-packages/tensorflow/python/keras/engine/network.py", line 1218, in save
raise NotImplementedError
The Encoder and Decoder subclass tf.keras.Model, and tf.keras.Model is a subclass of Network. After reading the code in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/engine/network.py, I found that these two classes' _is_graph_network attribute is False. I tried to set the flag to True but got another error. So how can I save the model the author defined in the code?
I had a similar problem with the current TF version (1.11). I used the tf.keras API to define my model and trained it without problems. When I wanted to save my model using tensorflow.keras.models.save_model or model.save() (which just calls save_model), I got the following exception:
NotImplementedError: __deepcopy__() is only available when eager execution is enabled.
So I called tf.enable_eager_execution(), but because of a Lambda layer in my architecture I ended up with another NotImplementedError, this time from compute_output_shape. If your architecture does not contain a Lambda layer, enabling eager execution could fix your problem in tf 1.11.
My final "way to go" was to use model.save_weights('model_weights.h5'), because I did not need to save the model architecture, just the trained weights.
Btw: in my case it was also possible to switch from tensorflow.keras.* imports to keras.* and use "plain" Keras with the TF backend (model.save() works there, of course).
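A minimal sketch of that weights-only workaround for a subclassed model (the class here is a hypothetical stand-in; rebuild your real architecture in code, then restore the weights):
import tensorflow as tf

class SmallEncoder(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(8)

    def call(self, x):
        return self.dense(x)

# Training side: keep only the trained weights; the architecture lives in code.
model = SmallEncoder()
model(tf.zeros([1, 4]))                  # run one batch so the variables exist
model.save_weights('model_weights.h5')

# Inference side: rebuild the same class, build it, then load the weights back.
restored = SmallEncoder()
restored(tf.zeros([1, 4]))
restored.load_weights('model_weights.h5')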
model.save('model.h5py') may solve the problem. The key is to save it as an HDF5 file.
Consider using tf.keras.models.save_model() and load_model(); it may work.
I am using Python 2 and I'm trying to get the activations of a hidden layer. I am using the following code, which is giving me an error:
get_activations = theano.function([my_model.layers[0].input],
                                  my_model.layers[0].get_output(train=False),
                                  allow_input_downcast=True)
When I run the code it says:
AttributeError: 'Dense' object has no attribute 'get_output'
I have tried to use my_model.layers[0].output, which also does not work correctly.
What should I do to get the activations from a layer given?
The attribute get_output is only defined in old versions of Keras (0.3); it no longer exists in version 1.0.
See the new syntax in the Keras docs (FAQ).
Something like
get_activations = K.function([model.layers[0].input], [model.layers[1].output])
should work, since the hidden layer is the second layer in your model (i.e. model.layers[1]).
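Put together, a runnable sketch (a toy Sequential model stands in for my_model; adapt the layer index to whichever layer's activations you want, model.layers[1] in the asker's case):
import numpy as np
from keras import backend as K
from keras.layers import Dense
from keras.models import Sequential

# Toy model: one hidden layer followed by an output layer.
my_model = Sequential()
my_model.add(Dense(8, activation='relu', input_dim=4))
my_model.add(Dense(1, activation='sigmoid'))

# K.function maps the model's input tensor to any intermediate output tensor.
get_activations = K.function([my_model.layers[0].input],
                             [my_model.layers[0].output])

X = np.random.random((3, 4))
activations = get_activations([X])[0]   # K.function returns a list of outputs
print(activations.shape)                # (3, 8): activations of the hidden layer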