I am trying out various TensorFlow models from TensorFlow Hub, but I can't seem to get this one to work with KerasLayer:
"https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/3"
I am using the same procedure as the examples in the documentation:
https://www.tensorflow.org/tutorials/images/hub_with_keras
feature_extractor = hub.KerasLayer(URL,
                                   input_shape=(height, width, 3))
I even tried a few amendments, such as adding trainable=True and tags={"train"}, so it would look like this:
feature_extractor = hub.KerasLayer(URL,
                                   input_shape=(height, width, 3), trainable=True, tags={"train"})
because that's what the docs said to do.
However, I am still getting this error:
ValueError: Importing a SavedModel with tf.saved_model.load requires a 'tags=' argument if there is more than one MetaGraph. Got 'tags=None', but there are 2 MetaGraphs in the SavedModel with tag sets [[], ['train']]. Pass a 'tags=' argument to load this SavedModel
import tensorflow as tf
import tensorflow_hub as hub
import time
import numpy as np
import matplotlib.pylab as plt
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
from tensorflow.keras import layers
from tensorflow.keras.applications import imagenet_utils
from tensorflow.keras.callbacks import ModelCheckpoint
tf.compat.v1.disable_eager_execution()
URL = "https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/3"
module = hub.Module(URL)
height, width = hub.get_expected_image_size(module)
# here is where the error comes from
feature_extractor = hub.KerasLayer(URL,
                                   input_shape=(height, width, 3), trainable=True, tags={"train"})
At this time, hub.KerasLayer only works with TF2-style models such as https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4. Please stay tuned for a wider choice of TF2-style models as TensorFlow 2.0 gets released.
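To illustrate, here is a minimal sketch of what does work with hub.KerasLayer today, using the TF2-style Inception V3 module mentioned above (the 299x299 input size matches that module's documentation; the 10-class head is just a placeholder for this example):
import tensorflow as tf
import tensorflow_hub as hub

# TF2-style SavedModels load directly into hub.KerasLayer; no tags= argument is needed.
URL = "https://tfhub.dev/google/tf2-preview/inception_v3/feature_vector/4"
feature_extractor = hub.KerasLayer(URL, input_shape=(299, 299, 3), trainable=False)

# Stack a classification head on top of the extracted feature vector.
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(10, activation='softmax')
])
model.summary()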
I'm trying to load the pre-trained u2net_human_seg.onnx model in my Python program to use it for better background removal.
When I try, I get an error: _pickle.UnpicklingError: invalid load key, '\x08'.
My code:
import rembg
from PIL import Image
import torch
import numpy as np
def remove_background(input_path):
    input = Image.open(input_path)
    output = remove(input)
    output.save(input_path)

def remove(input):
    input_tensor = torch.from_numpy(np.array(input)).float()
    output = model(input_tensor)
    output_image = Image.fromarray(output.detach().numpy())
    return output_image

if __name__ == '__main__':
    model = torch.load("models/u2net_human_seg.onnx")  # this line raises the UnpicklingError
    remove_background("images/test.jpg")
My input is a test image in which two people are visible. The paths should be correct...
I didn't find similar cases online, as no one else seems to load a model in a custom application like this.
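For context on why this fails: torch.load expects a pickled PyTorch checkpoint, while a .onnx file is a serialized protobuf graph, hence the unpickling error. Below is a minimal sketch of running the same file with onnxruntime instead; the 320x320 input size and the simple /255 normalization are assumptions about this particular model and may need adjusting:
import numpy as np
import onnxruntime as ort
from PIL import Image

# Load the ONNX graph with an inference session rather than torch.load.
session = ort.InferenceSession("models/u2net_human_seg.onnx")
input_name = session.get_inputs()[0].name

# Build an NCHW float32 batch from the image (size and normalization assumed).
img = Image.open("images/test.jpg").convert("RGB").resize((320, 320))
x = np.asarray(img, dtype=np.float32) / 255.0
x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]

outputs = session.run(None, {input_name: x})
print([o.shape for o in outputs])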
I'm trying to build the body of the ResNet18 in this code:
from fastai.vision.data import create_body
from fastai.vision import models
from torchvision.models.resnet import resnet18
from fastai.vision.models.unet import DynamicUnet
import torch
def build_res_unet(n_input=1, n_output=2, size=256):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    body = create_body(resnet18, n_in=n_input, pretrained=True, cut=-2)
    net_G = DynamicUnet(body, n_output, (size, size)).to(device)
    return net_G
net_G = build_res_unet(n_input=1, n_output=2, size=256)
but I keep getting an error:
TypeError: create_body() got an unexpected keyword argument 'n_in'
but in the fastai docs the n_in parameter is present.
How can I create the body, am I missing something?
I tested the code on my local machine and it runs perfectly; maybe there is a problem on Google Colab. I will update this answer if I find a way to make it run on Colab.
EDIT: I solved the problem by adding !pip install fastai==2.4 on Google Colab; the version preinstalled by Colab was very old.
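Concretely, the fix is a single cell at the top of the notebook (the version check afterwards is just an illustrative sanity check):
!pip install fastai==2.4

# If fastai was already imported, restart the runtime before re-running.
import fastai
print(fastai.__version__)  # should report 2.4, whose create_body accepts n_in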
System information:
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
TensorFlow installed from (source or binary): pip installed
TensorFlow version (use command below): v2.0.0-rc2-26-g64c3d382ca 2.0.0
Python version: 3.7.1
Error:
Unable to save TensorFlow Keras LSTM model to SavedModel format for exporting to Google Cloud bucket.
Error Message:
ValueError: Attempted to save a function b'__inference_lstm_2_layer_call_fn_36083' which references a symbolic Tensor Tensor("dropout/mul_1:0", shape=(None, 1280), dtype=float32) that is not a simple constant. This is not supported.
Code:
import tensorflow as tf
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt
import tqdm
import datetime
from sklearn.preprocessing import LabelBinarizer
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.),
    tf.keras.layers.LSTM(512, dropout=0.5, recurrent_dropout=0.5),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(len(LABELS), activation='softmax')
])

model.compile(loss='categorical_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy', 'top_k_categorical_accuracy'])
test_file = 'C:/.../testlist01.txt'
train_file = 'C:/.../trainlist01.txt'
with open(test_file) as f:
    test_list = [row.strip() for row in list(f)]

with open(train_file) as f:
    train_list = [row.strip() for row in list(f)]

train_list = [row.split(' ')[0] for row in train_list]

def make_generator(file_list):
    def generator():
        np.random.shuffle(file_list)
        for path in file_list:
            full_path = os.path.join(BASE_PATH, path).replace('.avi', '.npy')
            label = os.path.basename(os.path.dirname(path))
            features = np.load(full_path)
            padded_sequence = np.zeros((SEQUENCE_LENGTH, 1280))
            padded_sequence[0:len(features)] = np.array(features)
            transformed_label = encoder.transform([label])
            yield padded_sequence, transformed_label[0]
    return generator
# note: the label shape must be the 1-tuple (len(LABELS),), not a bare int
train_dataset = tf.data.Dataset.from_generator(make_generator(train_list),
                                               output_types=(tf.float32, tf.int16),
                                               output_shapes=((SEQUENCE_LENGTH, 1280), (len(LABELS),)))
train_dataset = train_dataset.batch(16).prefetch(tf.data.experimental.AUTOTUNE)

valid_dataset = tf.data.Dataset.from_generator(make_generator(test_list),
                                               output_types=(tf.float32, tf.int16),
                                               output_shapes=((SEQUENCE_LENGTH, 1280), (len(LABELS),)))
valid_dataset = valid_dataset.batch(16).prefetch(tf.data.experimental.AUTOTUNE)
model.fit(train_dataset, epochs=17, validation_data=valid_dataset)
BASE_DIRECTORY = 'C:\\...\\saved_model\\LSTM\\1\\'
tf.saved_model.save(model, BASE_DIRECTORY)
In addition to the answer of The Guy with The Hat: the .h5 extension is sufficient to tell Keras to store the model in its HDF5 format, so
model.save('path_to_saved_model/model.h5')
should do the trick.
Try saving it with the Keras API, not the SavedModel API. See Save and serialize models with Keras: Export to SavedModel.
model.save('path_to_saved_model', save_format='tf')
That should save the model in the SavedModel format.
I think there is a bug: you need to set the dropout to 0 for tf.saved_model.save and model.save(..., save_format='tf') to work.
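A sketch of that workaround for the model above, rebuilding it with all dropout disabled and copying the trained weights across (this works because dropout layers carry no weights, but it is a guess at the intended procedure rather than an official fix):
# Rebuild the architecture with dropout turned off everywhere.
export_model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.),
    tf.keras.layers.LSTM(512, dropout=0.0, recurrent_dropout=0.0),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dropout(0.0),
    tf.keras.layers.Dense(len(LABELS), activation='softmax')
])

# Build once so the layers have weights, then copy them from the trained model.
export_model.build(input_shape=(None, SEQUENCE_LENGTH, 1280))
export_model.set_weights(model.get_weights())
tf.saved_model.save(export_model, BASE_DIRECTORY)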
This seems to be a bug in both TensorFlow 2.0 and 2.1; after upgrading my TensorFlow to v2.2, it's working fine.
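For completeness, the upgrade itself is a one-liner (shown notebook-style, like the fastai fix earlier; restart the Python process afterwards so the new version is actually loaded):
!pip install --upgrade tensorflow==2.2.0

import tensorflow as tf
print(tf.__version__)  # should now report 2.2.0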
I'm trying to deploy a recently trained Keras model on Google Cloud ML Engine. I googled around to see what format the saved model needs to be for ML Engine and found this:
import keras.backend as K
import tensorflow as tf
from keras.models import load_model, Sequential
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants
from tensorflow.python.saved_model.signature_def_utils_impl import predict_signature_def
# reset session
K.clear_session()
sess = tf.Session()
K.set_session(sess)
# disable loading of learning nodes
K.set_learning_phase(0)
# load model
model = load_model('model.h5')
config = model.get_config()
weights = model.get_weights()
new_Model = Sequential.from_config(config)
new_Model.set_weights(weights)
# export saved model
export_path = 'YOUR_EXPORT_PATH' + '/export'
builder = saved_model_builder.SavedModelBuilder(export_path)
signature = predict_signature_def(inputs={'NAME_YOUR_INPUT': new_Model.input},
                                  outputs={'NAME_YOUR_OUTPUT': new_Model.output})
with K.get_session() as sess:
    builder.add_meta_graph_and_variables(
        sess=sess,
        tags=[tag_constants.SERVING],
        signature_def_map={signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
    builder.save()
However, in Keras 2.1.3, keras.backend no longer seems to have clear_session(), set_session(), or get_session(). What's the modern way of handling this issue? Do those functions live elsewhere now?
Thanks!
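In case it helps while waiting for an answer: the same session helpers are also exposed on TensorFlow's bundled Keras backend, so a hedged sketch of the equivalent calls (assuming a TF 1.x installation, matching the code above) would be:
import tensorflow as tf

# The session utilities live under tf.keras.backend in TF 1.x.
tf.keras.backend.clear_session()
sess = tf.Session()
tf.keras.backend.set_session(sess)
tf.keras.backend.set_learning_phase(0)  # disable learning-phase nodes for export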
I'm trying to reproduce some results from the paper describing the Grad-CAM method, using Keras with the TensorFlow-GPU backend, and I obtain totally incorrect labels.
I've captured a screenshot of figure 1(a) from that paper and am trying to make the pretrained VGG16 from Keras Applications classify it.
Here is my image:
Here is my code (a cell from a Jupyter notebook). Part of the code was copied from the Keras manuals:
import imageio
import numpy as np
from matplotlib import pyplot as plt
from skimage.transform import resize
from keras import activations
from keras.applications import VGG16
from keras.applications.vgg16 import preprocess_input, decode_predictions
# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
%matplotlib inline
dog_img = imageio.imread(r"F:\tmp\Opera Snapshot_2018-09-24_133452_arxiv.org.png")
dog_img = dog_img[:, :, 0:3] # Opera has added alpha channel
dog_img = resize(dog_img, (224, 224, 3))
x = np.expand_dims(dog_img, axis=0)
x = preprocess_input(x, mode='tf')
pred = model.predict(x)
decode_predictions(pred)
Output:
[[('n03788365', 'mosquito_net', 0.017053505),
('n03291819', 'envelope', 0.015034639),
('n15075141', 'toilet_tissue', 0.012603286),
('n01737021', 'water_snake', 0.010620943),
('n04209239', 'shower_curtain', 0.009625845)]]
However, when I submit the same image to the online service run by the paper's authors, http://gradcam.cloudcv.org/classification, I see the correct label "Boxer".
Here is the output from something they call "Terminal":
Completed the Classification Task
"Time taken for inference in torch: 9.0"
"Total time taken: 9.12565684319"
{"classify_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_243.png", "execution_time": 9.0, "label": 243.0, "classify_gb_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_gcam_243.png", "classify_gcam_raw": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_raw_243.png", "input_image": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/Opera Snapshot_2018-09-24_133452_arxiv.org.png", "pred_label": 243.0, "classify_gb": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_243.png"}
Completed the Classification Task
"Time taken for inference in torch: 9.0"
"Total time taken: 9.05940508842"
{"classify_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_243.png", "execution_time": 9.0, "label": 243.0, "classify_gb_gcam": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_gcam_243.png", "classify_gcam_raw": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gcam_raw_243.png", "input_image": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/Opera Snapshot_2018-09-24_133452_arxiv.org.png", "pred_label": 243.0, "classify_gb": "./media/grad_cam/classification/86560f84-bfe5-11e8-a657-22000b4a9274/classify_gb_243.png"}
Job published successfully
Publishing job to Classification Queue
Starting classification job on VGG_ILSVRC_16_layers.caffemodel
I use Anaconda Python 64-bit, on Windows 7.
Versions of relevant software on my PC:
keras 2.2.2 0
keras-applications 1.0.4 py36_1
keras-base 2.2.2 py36_0
keras-preprocessing 1.0.2 py36_1
tensorflow 1.10.0 eigen_py36h849fbd8_0
tensorflow-base 1.10.0 eigen_py36h45df0d8_0
What am I doing wrong? How can I get the boxer label?
Apparently, you cannot do the following line:
dog_img = dog_img[:, :, 0:3] # Opera has added alpha channel
So I loaded the image using a Keras utility called load_img, which doesn't add the alpha channel.
The complete code:
import numpy as np
from keras.applications import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input, decode_predictions
# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
dog_img = image.img_to_array(image.load_img(r"F:\tmp\Opera Snapshot_2018-09-24_133452_arxiv.org.png", target_size=(224, 224)))
x = np.expand_dims(dog_img, axis=0)
x = preprocess_input(x)
pred = model.predict(x)
print(decode_predictions(pred))
[[('n02108089', 'boxer', 0.29122102), ('n02108422', 'bull_mastiff', 0.199128), ('n02129604', 'tiger', 0.10050287), ('n02123159', 'tiger_cat', 0.09733449), ('n02109047', 'Great_Dane', 0.056869864)]]
Considering that all the output probabilities are very low and more or less evenly distributed around 0.01, my guess is that you are preprocessing the image incorrectly and passing some sort of scrambled image that looks like noise to model.predict(). Try to debug by calling imshow on the image right before you predict().
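A quick sketch of that debugging step, assuming x is the preprocessed batch from the question (preprocess_input with mode='tf' maps values into [-1, 1], so we rescale back to [0, 1] for display):
from matplotlib import pyplot as plt

# Visualize exactly what the network will see. x has shape (1, 224, 224, 3);
# undo the [-1, 1] scaling from preprocess_input(..., mode='tf') before plotting.
img_to_show = (x[0] + 1.0) / 2.0
plt.imshow(img_to_show)
plt.show()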