I have successfully built several models based on MobileNet using Keras. I noticed that MobileNetV2 has been added in Keras 2.2.0, but I could not manage to make it work:
from keras.applications.mobilenet_v2 import mobilenet_v2
base_model = mobilenet_v2.MobileNetV2(weights='imagenet', include_top=False)
I get the following error: AttributeError: 'NoneType' object has no attribute 'image_data_format' on this line from mobilenet_v2.py: data_format=backend.image_data_format()
It seems to me that backend has a definition problem... I am using the TensorFlow backend; maybe it does not work with this one?
The problem comes from the import. The proper way to do this is the following:
from keras.applications import MobileNetV2
m = MobileNetV2(weights='imagenet', include_top=False)
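As a hedged follow-up (not part of the original answer): with include_top=False the network returns feature maps, so a small classification head is usually added on top before training. num_classes and the pooling/head choice below are placeholders for illustration.
from keras.applications import MobileNetV2
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

num_classes = 10  # placeholder
base_model = MobileNetV2(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base_model.output)   # collapse the feature maps
outputs = Dense(num_classes, activation='softmax')(x)
m = Model(inputs=base_model.input, outputs=outputs)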
I used Google Colab to train a Keras model:
import tensorflow as tf
import keras_tuner as kt

def build_model(hp):  # hp is the keras_tuner.HyperParameters object passed in by the tuner
    model = tf.keras.Sequential()
    hidden_layers = hp.Int('num_layers', min_value=1, max_value=3)
    model.add(tf.keras.Input(shape=(20,)))
    for i in range(hidden_layers):
        units = hp.Int('units_at_hidden_layer_' + str(i), min_value=17, max_value=40)
        model.add(tf.keras.layers.Dense(units, activation='selu', kernel_initializer='lecun_normal'))
    model.add(tf.keras.layers.Dense(3, activation='softmax'))
    model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy', 'loss', 'recall'])
    return model
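For context only, a minimal sketch of how such a build function is usually handed to Keras Tuner; the tuner type, search budget, directory names, and the x_train/x_val arrays are assumptions, not part of the original question.
import keras_tuner as kt  # already imported above

tuner = kt.RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=10,            # assumed search budget
    directory='tuner_dir',    # placeholder paths
    project_name='demo',
)
tuner.search(x_train, y_train, epochs=20, validation_data=(x_val, y_val))  # placeholder data
model = tuner.get_best_models(num_models=1)[0]  # the model that gets exported below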
Then I exported the model to a format that should be readable by tensorflowjs in Node:
import tensorflowjs as tfjs
tfjs.converters.save_keras_model(model, "/path/to/output")
When I tried to load the model in a Node.js program:
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-node';
import * as fs from 'fs';
import * as path from 'path';
(async function() {
const model = await tf.loadLayersModel("file:///path/to/output/model.json");
})();
I ended up with the following error:
Error: Unknown initializer: LecunNormal. This may be due to one of the following reasons:
1. The initializer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
2. The custom initializer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().
But I can see that LeCun normal is defined here: https://github.com/tensorflow/tfjs/blob/master/tfjs-layers/src/exports_initializers.ts#L183-L185
export function leCunNormal(args: SeedOnlyInitializerArgs): Initializer {
return new LeCunNormal(args);
}
What can I do to ensure my model is loaded in Node.js?
I don't need to train in Node at all, just load the weights that were already trained. I saw on another Stack Overflow post that someone got the load to work by changing the initializer to Zeros, but I'm not sure how to do that. Ideally there would be a way for tensorflowjs to recognize the LecunNormal initializer in the model.json.
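For what it's worth, here is a hedged sketch of the "change the initializer to Zeros" idea mentioned above, done on the Python side before re-exporting. Initializers only matter when variables are created, so swapping them on an already-trained model should not touch the learned weights; it only changes what gets serialized into model.json. The paths are placeholders, and this is an assumption about a workaround, not a confirmed fix.
import tensorflow as tf
import tensorflowjs as tfjs

# `model` is the trained Keras model from above
for layer in model.layers:
    if hasattr(layer, 'kernel_initializer'):
        layer.kernel_initializer = tf.keras.initializers.Zeros()

tfjs.converters.save_keras_model(model, "/path/to/output")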
I am just trying to run my trained TensorFlow 2 model in PyCharm. I use:
ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8
This is the code I use:
# OBJECT DETECTOR
import cv2 # OpenCV
import matplotlib.pyplot as plt
configfile = "label_map.pbtxt"  # config file for the trained MobileNet SSD model
frozen_model = "export/export/saved_model/saved_model.pb"  # weights derived from TensorFlow
model = cv2.dnn_DetectionModel(frozen_model, configfile)
and I got this error:
model = cv2.dnn_DetectionModel(frozen_model, configfile)  # Model used to detect objects
SystemError: <class 'cv2.dnn_DetectionModel'> returned a result with an error set.
If you have seen this error before, please tell me how to solve it.
I think you want to load the model using the .pb and the .pbtxt config files?
You need to use the correct function to load the model:
model = cv2.dnn.readNetFromTensorflow(frozen_model, configfile)
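If you still want the DetectionModel convenience API, the loaded net can be wrapped back into it. A minimal sketch, assuming frozen_model points at a frozen inference graph and configfile at a text graph config generated for OpenCV (the 320x320 input size and the scale/mean values are assumptions for this SSD variant, and the image path is a placeholder):
import cv2

net = cv2.dnn.readNetFromTensorflow(frozen_model, configfile)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(320, 320), scale=1.0 / 127.5,
                     mean=(127.5, 127.5, 127.5), swapRB=True)

img = cv2.imread("test.jpg")  # placeholder image
class_ids, confidences, boxes = model.detect(img, confThreshold=0.5)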
I used this code to load weights
from transformers import DebertaTokenizer, DebertaModel
import torch
tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
model = DebertaModel.from_pretrained('microsoft/deberta-base')
After that, I want to set the optimizer and loss function using the compile function:
import tensorflow as tf

model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=tf.metrics.SparseCategoricalAccuracy(),
)
I got this error:
AttributeError: 'DebertaModel' object has no attribute 'compile'
Is the only way to work with it to use the PyTorch library?
The transformers library offers a TensorFlow-based model, TFDebertaModel.
Replace DebertaModel with TFDebertaModel and the execution of compile() works.
I changed your snippet to the following, and compile works. Try this:
from transformers import DebertaTokenizer, TFDebertaModel
Tested with transformers version: 4.19.2
https://huggingface.co/docs/transformers/v4.19.2/en/model_doc/deberta#transformers.TFDebertaModel
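For reference, a minimal sketch of the suggested change with the compile() call from the question; if the checkpoint only ships PyTorch weights, from_pt=True may be needed when loading the TF model.
import tensorflow as tf
from transformers import DebertaTokenizer, TFDebertaModel

tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base')
model = TFDebertaModel.from_pretrained('microsoft/deberta-base')

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.metrics.SparseCategoricalAccuracy()],
)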
On the line where I try to add an LSTM layer to my Sequential model from tensorflow.keras in this way:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, GRU, TimeDistributed
#...
modelX1 = Sequential()
modelX1.add(LSTM(LSTM_neurons[0], input_shape=(dataX1.shape[1], dataX1.shape[2]), activation='tanh', recurrent_activation='tanh', use_bias=True, return_sequences=True))
This error appears:
File "C:\Users\mroso\Anaconda3\lib\site-packages\tensorflow_core\python\ops\resource_variable_ops.py", line 168, in _variable_handle_from_shape_and_dtype handle_data.shape_and_type.append(
AttributeError: 'google.protobuf.pyext._message.RepeatedCompositeCo' object has no attribute 'append'
I am using TensorFlow 2.1.0, protobuf 3.11.4, Spyder 4, and I am working on Windows 10.
When I was using TensorFlow 1.4 (90% sure that was the version), everything was fine.
Does anyone know how to fix this error?
Thank you in advance for your help.
You might want to look at the solution stated at this link
As the solution suggests, you can try rolling back your TensorFlow version to 2.0.0
fashion_model.compile(
loss = keras.losses.categorical_crossentropy,
optimizer = tf.keras.optimizers.Adam(),
metrics = ['accuracy']
)
When I execute this line of code, I am facing the error:
module 'tensorflow' has no attribute 'log'
My TensorFlow version is 2.0.
Substitute tf.math.log for tf.log in TF 2.0.
If you know the exact line where tf.log is used, replace it with tf.math.log.
If not, you can use this guide to automatically upgrade code to TensorFlow 2.
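For illustration, the one-line replacement looks like this (the tensor values are arbitrary):
import tensorflow as tf

x = tf.constant([1.0, 2.0, 10.0])
y = tf.math.log(x)  # tf.log from TF 1.x lives under tf.math in TF 2.x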
I also faced a similar issue; I then accessed each Keras object through tensorflow and it resolved it:
loss = tf.keras.losses.categorical_crossentropy
It is due to a TensorFlow update.
Just do this:
from tensorflow import keras
Then run your code
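Putting the two suggestions together, a minimal sketch of the compile() call from the question with every Keras object taken from tensorflow (fashion_model is assumed to be the model defined earlier in the question):
import tensorflow as tf
from tensorflow import keras

# fashion_model: the model built earlier in the question
fashion_model.compile(
    loss=keras.losses.categorical_crossentropy,
    optimizer=tf.keras.optimizers.Adam(),
    metrics=['accuracy'],
)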