Keras model deployed in tf-serving giving NaN outputs - python

I have built a Keras classifier that takes in an image (224, 224, 3) and a text encoding (5000) and gives one output. The model works fine when using Keras's predict function.
I saved the model using:
tf.saved_model.simple_save(
    sess,
    export_path,
    inputs={'input_image': model.input[0], 'input_text': model.input[1]},
    outputs={t.name: t for t in model.outputs})
When I run it with the REST API it returns:
{u'predictions': [[nan, nan]]}
The way I am sending the data is:
import json

import cv2
import numpy as np
import requests
from tensorflow.python.saved_model import signature_constants

img = cv2.imread(image_path)

# rescale image
img = cv2.resize(img, (scale_size[0], scale_size[1]))
img = img.astype(np.float32)

# subtract mean
img -= mean

text = np.load(text_path)
text = text.astype(np.float32)

payload = {
    "signature_name": signature_constants.CLASSIFY_METHOD_NAME,
    "instances": [
        {
            "input_image": img.tolist(),
            "input_text": text.tolist()
        }
    ]
}

# sending POST request to the TensorFlow Serving server
r = json.dumps(payload)
results = requests.post('http://localhost:9000/v1/models/<model-name>:predict', data=r)
pred = json.loads(results.content.decode('utf-8'))
print(pred)
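One thing worth verifying first, offered as a suggestion rather than a confirmed fix: tf.saved_model.simple_save exports a single predict-style signature under the default key serving_default, while signature_constants.CLASSIFY_METHOD_NAME is a method name ("tensorflow/serving/classify"), not a signature key. You can list what the export actually contains with saved_model_cli (export_path as in the save code above):
saved_model_cli show --dir <export_path> --all
If the only signature listed is serving_default, the payload should use "signature_name": "serving_default", or omit the field entirely, since that is the default.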

Related

correct configuration path of custom object detection model

I have written an object detection script that uses a model trained on the COCO dataset to detect its 91 classes of objects. I was curious whether I could add a custom object detection model so that additional classes are detected as well. For instance, I trained a custom model to detect two extra classes (dog and panda) and wanted to create a Python script that detects the 91 COCO classes plus the 2 additional classes, for a 93-class detection pipeline.
import cv2
import pyttsx3

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)

# Model 1 - COCO dataset (91 classes)
classNames = []
classFile = 'coco.names'
with open(classFile, 'rt') as f:
    classNames = [line.rstrip() for line in f]

# Found documentation for SSD MobileNet V3
configPath = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
weightsPath = 'frozen_inference_graph_coco.pb'
net = cv2.dnn_DetectionModel(weightsPath, configPath)
net.setInputSize(320, 320)
net.setInputScale(1.0 / 127.5)
net.setInputMean((127.5, 127.5, 127.5))
net.setInputSwapRB(True)

# Model 2 - two-class model (dog and panda)
classNames2 = []
classFile2 = 'coco.names2'  # this file contains 1. dog and 2. panda
with open(classFile2, 'rt') as f:
    classNames2 = [line.rstrip() for line in f]

configPath2 = 'ssd_mobilenet_v1_coco.config'  # This is where I get the error - I am not sure which configuration path to use
weightsPath2 = 'frozen_inference_graph_dog_panda.pb'
net2 = cv2.dnn_DetectionModel(weightsPath2, configPath2)
net2.setInputSize(320, 320)
net2.setInputScale(1.0 / 127.5)
net2.setInputMean((127.5, 127.5, 127.5))
net2.setInputSwapRB(True)

engine = pyttsx3.init()

while True:
    success, img = cap.read()
    classIds, confs, bbox = net.detect(img, confThreshold=0.60, nmsThreshold=0.2)
    classIds2, confs2, bbox2 = net2.detect(img, confThreshold=0.60, nmsThreshold=0.2)
    print(classIds, bbox)
    print(classIds2, bbox2)
    if len(classIds2) != 0:
        for classId2, confidence2, box2 in zip(classIds2.flatten(), confs2.flatten(), bbox2):
            className2 = classNames2[classId2 - 1]
            str2 = str(className2)
            print(str2)
            engine.say(str2 + " detected")
            engine.runAndWait()
        continue
    if len(classIds) != 0:
        for classId, confidence, box in zip(classIds.flatten(), confs.flatten(), bbox):
            className = classNames[classId - 1]
            str1 = str(className)
            print(str1)
            engine.say(str1 + " detected")
            engine.runAndWait()
        continue
    if len(classIds) == 0 and len(classIds2) == 0:
        engine.say("no objects detected")
        engine.runAndWait()
        continue
    cv2.imshow('Output', img)
    cv2.waitKey(1)
My Error:
/Users/venuchannarayappa/PycharmProjects/ObjectDetector/venv/bin/python /Users/venuchannarayappa/PycharmProjects/ObjectDetector/Custom_Object_Detection_Voice_Feedback.py
[ERROR:0] global /Users/runner/work/opencv-python/opencv-python/opencv/modules/dnn/src/dnn.cpp (3554) getLayerShapesRecursively OPENCV/DNN: []:(_input): getMemoryShapes() throws exception. inputs=1 outputs=0/0 blobs=0
[ERROR:0] global /Users/runner/work/opencv-python/opencv-python/opencv/modules/dnn/src/dnn.cpp (3557) getLayerShapesRecursively input[0] = [ 1 3 300 300 ]
[ERROR:0] global /Users/runner/work/opencv-python/opencv-python/opencv/modules/dnn/src/dnn.cpp (3567) getLayerShapesRecursively Exception message: OpenCV(4.5.4) /Users/runner/work/opencv-python/opencv-python/opencv/modules/dnn/src/dnn.cpp:795: error: (-215:Assertion failed) inputs.size() == requiredOutputs in function 'getMemoryShapes'
Traceback (most recent call last):
File "/Users/venuchannarayappa/PycharmProjects/ObjectDetector/Custom_Object_Detection_Voice_Feedback.py", line 43, in
classIds2, confs2, bbox2 = net2.detect(img, confThreshold=0.60, nmsThreshold=0.2)
cv2.error: OpenCV(4.5.4) /Users/runner/work/opencv-python/opencv-python/opencv/modules/dnn/src/dnn.cpp:795: error: (-215:Assertion failed) inputs.size() == requiredOutputs in function 'getMemoryShapes'
Process finished with exit code 1
Here is a link to the files that I used to create my custom object detection model https://www.icloud.com/iclouddrive/0c9q7nxUN4bceeLg29XY157cw#my_model
Please help me determine how to set the configuration path. I believe the .config file of SSD MobileNet V1 should be used, but I keep getting a memory-shape error. Any help is appreciated!
Also, I have tried changing net.setInputSize(320, 320) to net.setInputSize(300, 300); this change gave me the same error.
Finally, I also tried using the COCO configuration and weights paths for both networks (configPath = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt' and weightsPath = 'frozen_inference_graph_coco.pb' both times), and the code runs properly. I believe this means the code is functional, but there is some shape error whenever I use my own frozen inference graph with any .config or .pbtxt file.
If anyone has trained their own custom model this way, they would only have needed to change the 'coco.names' file to list the correct classes and point the configuration and weights paths at their own files. I would highly appreciate any guidance.
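For what it's worth, OpenCV's dnn module does not read TensorFlow Object Detection API training .config files; for SSD graphs it expects a .pbtxt text graph generated from the frozen graph. OpenCV ships a helper script for this, samples/dnn/tf_text_graph_ssd.py; a sketch of the invocation, with file names matching the question but otherwise assumed:
python tf_text_graph_ssd.py --input frozen_inference_graph_dog_panda.pb --config pipeline.config --output graph_dog_panda.pbtxt
The resulting graph_dog_panda.pbtxt would then be passed as configPath2 in place of the training .config file.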

Integrating a 2D Medical Imaging X-Ray classifier which was trained on jpegs with a script which receives DCM files to be able to diagnose dicom files

Below is the order in which I will present my problem:
First, I will show you the .py script that I am using to run the web app on a local host (a Flask app). This web app is a classifier that shows whether a person has Viral Pneumonia, Bacterial Pneumonia, or is Normal; there are three classes (Viral, Bacterial, Normal), determined from chest X-rays in JPEG format.
Second, I will show you the different .py script for binary pneumonia classification, which takes in raw DICOM files and converts them into NumPy arrays before they are diagnosed.
To achieve the diagnosis, I am trying to integrate my app.py script, which takes in JPEGs, with the binary pneumonia classifier, which takes in DICOM files, so as to take advantage of the second script's DICOM-processing function while using all the information and weights of the Viral/Bacterial classifier that I already have, so that it can be used in a clinical setup. Clinical setups use DICOM files, not JPEGs; that is why I am trying to combine these two scripts.
Below is my app.py script for Viral and Bacterial Pneumonia classification, which takes in JPEGs and which I am trying to integrate with the other script attached further below:
#::: Import modules and packages :::
# Flask utils
from flask import Flask, redirect, url_for, request, render_template
from werkzeug.utils import secure_filename
from gevent.pywsgi import WSGIServer

# Import Keras dependencies
from tensorflow.keras.models import model_from_json
from tensorflow.python.framework import ops
ops.reset_default_graph()
from keras.preprocessing import image

# Import other dependencies
import numpy as np
import h5py
from PIL import Image
import PIL
import os

#::: Flask App Engine :::
# Define a Flask app
app = Flask(__name__)

# ::: Prepare Keras Model :::
# Model files
MODEL_ARCHITECTURE = './model/model_adam.json'
MODEL_WEIGHTS = './model/model_100_eopchs_adam_20190807.h5'

# Load the model from external files
json_file = open(MODEL_ARCHITECTURE)
loaded_model_json = json_file.read()
json_file.close()
model = model_from_json(loaded_model_json)

# Get weights into the model
model.load_weights(MODEL_WEIGHTS)
print('Model loaded. Check http://127.0.0.1:5000/')

# ::: MODEL FUNCTIONS :::
def model_predict(img_path, model):
    '''
    Args:
        -- img_path : a URL path where a given image is stored.
        -- model : a given Keras CNN model.
    '''
    IMG = image.load_img(img_path).convert('L')
    print(type(IMG))

    # Pre-processing the image
    IMG_ = IMG.resize((257, 342))
    print(type(IMG_))
    IMG_ = np.asarray(IMG_)
    print(IMG_.shape)
    IMG_ = np.true_divide(IMG_, 255)
    IMG_ = IMG_.reshape(1, 342, 257, 1)
    print(type(IMG_), IMG_.shape)

    print(model)
    model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='rmsprop')
    predict_x = model.predict(IMG_)
    print(predict_x)
    prediction = np.argmax(predict_x, axis=1)
    print(prediction)
    return prediction

# ::: FLASK ROUTES
@app.route('/', methods=['GET'])
def index():
    # Main Page
    return render_template('index.html')

@app.route('/predict', methods=['GET', 'POST'])
def upload():
    # Constants:
    classes = {'TRAIN': ['BACTERIA', 'NORMAL', 'VIRUS'],
               'VALIDATION': ['BACTERIA', 'NORMAL'],
               'TEST': ['BACTERIA', 'NORMAL', 'VIRUS']}

    if request.method == 'POST':
        # Get the file from the POST request
        f = request.files['file']

        # Save the file to ./uploads
        basepath = os.path.dirname(__file__)
        file_path = os.path.join(
            basepath, 'uploads', secure_filename(f.filename))
        f.save(file_path)

        # Make a prediction
        prediction = model_predict(file_path, model)

        predicted_class = classes['TRAIN'][prediction[0]]
        print('We think that is {}.'.format(predicted_class.lower()))

        return str(predicted_class).lower()

if __name__ == '__main__':
    app.run(debug=True)
Below again is the already-functioning binary pneumonia classification script, which takes in DICOM files, and which I am trying to integrate with the weights and preprocessing of the Viral/Bacterial classifier:
## Loading standard modules and libraries
import numpy as np
import pandas as pd
import pydicom
%matplotlib inline
import matplotlib.pyplot as plt
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_json
from skimage.transform import resize

# This function reads in a .dcm file, checks the important fields for our device, and returns a numpy array
# of just the imaging data
def check_dicom(filename):
    print('Loading file {} ...'.format(filename))
    ds = pydicom.dcmread(filename)
    if (ds.BodyPartExamined != 'CHEST') | (ds.Modality != 'DX') | (ds.PatientPosition not in ['PA', 'AP']):
        print('The image is not valid because the image position, the image type or the body part is not as per standards')
        return
    else:
        print('ID:', ds.PatientID,
              'Age:', ds.PatientAge,
              'Modality:', ds.Modality,
              'Postion: ', ds.PatientPosition,
              'Body Part: ', ds.BodyPartExamined,
              'Study Desc: ', ds.StudyDescription)
        img = ds.pixel_array
        return img

# This function takes the numpy array output by check_dicom and
# runs the appropriate pre-processing needed for our model input
def preprocess_image(img, img_mean, img_std, img_size):
    # todo
    img = resize(img, (224, 224))
    img = img / 255.0
    grey_img = (img - img_mean) / img_std
    proc_img = np.zeros((224, 224, 3))
    proc_img[:, :, 0] = grey_img
    proc_img[:, :, 1] = grey_img
    proc_img[:, :, 2] = grey_img
    proc_img = np.resize(proc_img, img_size)
    return proc_img

# This function loads in our trained model w/ weights and compiles it
def load_model(model_path, weight_path):
    # todo
    json_file = open(model_path, 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    model = model_from_json(loaded_model_json)
    model.load_weights(weight_path)
    return model

# This function uses our device's threshold parameters to predict whether or not
# the image shows the presence of pneumonia using our trained model
def predict_image(model, img, thresh):
    # todo
    result = model.predict(img)
    print('Predicted value:', result)
    predict = result[0]
    prediction = "Negative"
    if (predict > thresh):
        prediction = "Positive"
    return prediction
test_dicoms = ['test1.dcm', 'test2.dcm', 'test3.dcm', 'test4.dcm', 'test5.dcm', 'test6.dcm']
model_path = "my_model2.json"  # path to saved model
weight_path = "xray_class_my_model2.best.hdf5"  # path to saved best weights
IMG_SIZE = (1, 224, 224, 3)  # This might be different if you did not use vgg16
img_mean = 0.49262813  # mean image value from Build and train model line 22
img_std = 0.24496286  # loads the std dev from Build and train model line 22
my_model = load_model(model_path, weight_path)  # loads model
thresh = 0.62786263  # threshold value for New Model2 from Build and train model line 66 at 80% Precision

# use the .dcm files to test your prediction
for i in test_dicoms:
    img = np.array([])
    img = check_dicom(i)
    if img is None:
        continue
    img_proc = preprocess_image(img, img_mean, img_std, IMG_SIZE)
    pred = predict_image(my_model, img_proc, thresh)
    print('Model Classification:', pred, 'for Pneumonia')
    print('--------------------------------------------------------------------------------------------------------')
Output of above script:
Loading file test1.dcm ...
ID: 2 Age: 81 Modality: DX Postion: PA Body Part: CHEST Study Desc: No Finding
Predicted value: [[0.4775539]]
Model Classification: Negative for Pneumonia
--------------------------------------------------------------------------------------------------------
Loading file test2.dcm ...
ID: 1 Age: 58 Modality: DX Postion: AP Body Part: CHEST Study Desc: Cardiomegaly
Predicted value: [[0.47687072]]
Model Classification: Negative for Pneumonia
--------------------------------------------------------------------------------------------------------
Loading file test3.dcm ...
ID: 61 Age: 77 Modality: DX Postion: AP Body Part: CHEST Study Desc: Effusion
Predicted value: [[0.47764364]]
Model Classification: Negative for Pneumonia
--------------------------------------------------------------------------------------------------------
Loading file test4.dcm ...
The image is not valid because the image position, the image type or the body part is not as per standards
Loading file test5.dcm ...
The image is not valid because the image position, the image type or the body part is not as per standards
Loading file test6.dcm ...
The image is not valid because the image position, the image type or the body part is not as per standards
Threshold of 0.62786263 is considered at 80% Precision
Below is what I have tried so far, but the diagnosis I get is always Viral on each and every DICOM sample:
## Loading standard modules and libraries
import numpy as np
import pandas as pd
import pydicom
from PIL import Image
#%matplotlib inline
import matplotlib.pyplot as plt
import keras
from keras.models import Sequential
from keras.layers import Dense
from keras.models import model_from_json
from keras.preprocessing import image
from skimage.transform import resize

# This function reads in a .dcm file, checks the important fields for our device, and returns a numpy array
# of just the imaging data
def check_dicom(filename):
    print('Loading file {} ...'.format(filename))
    ds = pydicom.dcmread(filename)
    if (ds.BodyPartExamined != 'CHEST'):  #| (ds.Modality != 'DX'): #| (ds.PatientPosition not in ['PA', 'AP']):
        print('The image is not valid because the image position, the image type or the body part is not as per standards')
        return
    else:
        print('ID:', ds.PatientID,
              'Age:', ds.PatientAge,
              'Modality:', ds.Modality,
              'Postion: ', ds.PatientPosition,
              'Body Part: ', ds.BodyPartExamined,
              'Study Desc: ', ds.StudyDescription)
        img = ds.pixel_array
        return img

# This function takes the numpy array output by check_dicom and
# runs the appropriate pre-processing needed for our model input
def preprocess_image(img):
    # todo
    #im = np.reshape(img, (342, 257))
    #im = np.arange(257)
    #img = Image.fromarray(im)
    #img = image.load_img(img).convert('L')
    img = resize(img, (342, 257))
    grey_img = img / 255.0
    #grey_img = (img - img_mean) / img_std
    proc_img = np.zeros((1, 342, 257, 1))
    proc_img[:, :, :, 0] = grey_img
    #proc_img[:, :, :, 1] = grey_img
    #proc_img[:, :, :, 2] = grey_img
    proc_img = proc_img.reshape(1, 342, 257, 1)
    return proc_img

# This function loads in our trained model w/ weights and compiles it
def load_model(model_path, weight_path):
    # todo
    json_file = open(model_path, 'r')
    loaded_model_json = json_file.read()
    json_file.close()
    model = model_from_json(loaded_model_json)
    model.load_weights(weight_path)
    model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='rmsprop')
    return model

# This function uses our device's threshold parameters to predict whether or not
# the image shows the presence of pneumonia using our trained model
def predict_image(model, img):
    # todo
    model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='rmsprop')
    #x = np.expand_dims(img, axis=0)
    predict_x = model.predict(img)
    print(predict_x)
    prediction = np.argmax(predict_x, axis=1)
    print(prediction)
    return prediction

test_dicoms = ['test3.dcm', 'test2.dcm', 'test1.dcm', 'test4.dcm', 'test5.dcm', 'test6.dcm']
model_path = "model_adam.json"  # path to saved model
weight_path = "model.h5"  # path to saved best weights
#IMG_SIZE = (1, 342, 257, 1)  # This might be different if you did not use vgg16
#img_mean = 0.49262813  # mean image value from Build and train model line 22
#img_std = 0.24496286  # loads the std dev from Build and train model line 22
#my_model = load_model(model_path, weight_path)  # loads model
#thresh = 0.62786263  # threshold value for New Model2 from Build and train model line 66 at 80% Precision

# use the .dcm files to test your prediction
for i in test_dicoms:
    img = np.array([])
    img = check_dicom(i)
    if img is None:
        continue
    classes = {'TRAIN': ['BACTERIAL', 'NORMAL', 'VIRAL'],
               'VALIDATION': ['BACTERIA', 'NORMAL'],
               'TEST': ['BACTERIA', 'NORMAL', 'VIRUS']}
    img_proc = preprocess_image(img)
    prediction = predict_image(load_model(model_path, weight_path), img_proc)
    predicted_class = classes['TRAIN'][int(prediction[0])]
    print('Model Classification:', predicted_class, 'Pneumonia')
    print('--------------------------------------------------------------------------------------------------------')
Below is the output:
2022-01-02 10:50:00.817561: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2022-01-02 10:50:00.817601: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Loading file test3.dcm ...
ID: 61 Age: 77 Modality: DX Postion: AP Body Part: CHEST Study Desc: Effusion
2022-01-02 10:50:02.652828: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2022-01-02 10:50:02.652859: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303)
2022-01-02 10:50:02.652899: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (Wisdom-HP-250-G3-Notebook-PC): /proc/driver/nvidia/version does not exist
2022-01-02 10:50:02.653123: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
[[0.01132523 0.00254696 0.98612785]]
[2]
Model Classification: VIRAL Pneumonia
--------------------------------------------------------------------------------------------------------
Loading file test2.dcm ...
ID: 1 Age: 58 Modality: DX Postion: AP Body Part: CHEST Study Desc: Cardiomegaly
[[0.01112939 0.00251635 0.9863543 ]]
[2]
Model Classification: VIRAL Pneumonia
--------------------------------------------------------------------------------------------------------
Loading file test1.dcm ...
ID: 2 Age: 81 Modality: DX Postion: PA Body Part: CHEST Study Desc: No Finding
[[0.01128576 0.00255111 0.9861631 ]]
[2]
Model Classification: VIRAL Pneumonia
--------------------------------------------------------------------------------------------------------
Loading file test4.dcm ...
The image is not valid because the image position, the image type or the body part is not as per standards
Loading file test5.dcm ...
ID: 2 Age: 81 Modality: CT Postion: PA Body Part: CHEST Study Desc: No Finding
[[0.01128576 0.00255111 0.9861631 ]]
[2]
Model Classification: VIRAL Pneumonia
--------------------------------------------------------------------------------------------------------
Loading file test6.dcm ...
ID: 2 Age: 81 Modality: DX Postion: XX Body Part: CHEST Study Desc: No Finding
WARNING:tensorflow:5 out of the last 5 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7fba38ed19d0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
[[0.01128576 0.00255111 0.9861631 ]]
[2]
Model Classification: VIRAL Pneumonia
---------------------------------------
My suspicion is that I went wrong in the image preprocessing steps when I integrated these two scripts (remember: the goal is to take advantage of the DICOM-reading function of the second script). The model is therefore taking in, and predicting on, the wrong input altogether, due to wrong array arrangements in the preprocessing after integrating the two scripts.
If you need information on any parameters from the Jupyter training presentation of the model, please ask.
When a classifier works OK in train/test but not when doing inference in production, a very common reason is that the training data was processed differently from the production data. The fix is to make sure both are processed the same way, ideally using the same bit of code.
1. How were the JPEGs the classifier was trained on processed? Do they originally come from DICOMs? If yes, what was the exact code for the conversion?
2. How were the JPEGs loaded during training? Pay special attention to bits that modify the data rather than merely copy it, such as grey_img = (img - img_mean) / img_std and the other commented-out lines in your code (maybe they were not commented out during training).
If you copy the DICOM-to-JPEG conversion from 1 and the JPEG loading from 2, you will probably have a working prediction; a minimal sketch of the idea follows.
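A minimal sketch of the "same bit of code" idea, assuming a shared module (the file name shared_preprocessing.py and the function name are hypothetical) imported by both the training script and the DICOM inference script:
# shared_preprocessing.py (hypothetical): one preprocessing path used for
# both training and inference, mirroring the app.py pipeline above
import numpy as np
from PIL import Image

def load_and_preprocess(pil_img):
    # greyscale, resize to 257x342, scale to [0, 1], add batch/channel dims
    img = pil_img.convert('L').resize((257, 342))
    arr = np.true_divide(np.asarray(img), 255)
    return arr.reshape(1, 342, 257, 1)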
The below DICOM-to-JPEG conversion function did the job for me:
from pydicom import read_file
from PIL.Image import fromarray

def take_dicom(dicomname):
    ds = read_file('Dicom_files/' + dicomname)
    im = fromarray(ds.pixel_array)
    im.save('./Jpeg/' + dicomname + '.jpg')
    pure_jpg = dicomname + '.jpg'
    return pure_jpg
I just had to use the os functions to point my prediction function to where it should pick up these JPEGs before they are preprocessed and diagnosed:
def preprocess_image(pure_jpg):
    '''
    Args:
        -- pure_jpg : the file name of the converted JPEG image.
    '''
    #print(pure_jpg)
    basepath = os.path.dirname('./Jpeg/')
    file_path = os.path.join(
        basepath, pure_jpg)
    #image = take_dicom(file_path)
    #print(str(image))
    IMG = image.load_img(file_path).convert('L')
    #print(IMG)
    #print(type(IMG))

    # Pre-processing the image
    IMG_ = IMG.resize((257, 342))
    #print(type(IMG_))
    IMG_ = np.asarray(IMG_)
    #print(IMG_.shape)
    IMG_ = np.true_divide(IMG_, 255)
    IMG_ = IMG_.reshape(1, 342, 257, 1)
    #print(type(IMG_), IMG_.shape)
    return IMG_
However, the problem is that it only works for the following two DICOM imaging modalities:
DX (Digital X-Ray)
CT (Computed Tomography)
CR (Computed Radiography) DICOM images are failing to convert.
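A possible explanation, offered as an assumption rather than a confirmed diagnosis: CR DICOMs commonly store 16-bit pixel data, which PIL cannot write out as JPEG directly, so fromarray only produces a JPEG-saveable image for 8-bit arrays. A sketch of a workaround that rescales to 8-bit before saving (take_dicom8 is a hypothetical variant of take_dicom above):
import numpy as np
from pydicom import read_file
from PIL import Image

def take_dicom8(dicomname):
    ds = read_file('Dicom_files/' + dicomname)
    arr = ds.pixel_array.astype(np.float32)
    # normalise to [0, 1], then rescale to 8-bit greyscale
    arr = (arr - arr.min()) / max(float(arr.max() - arr.min()), 1e-6)
    im = Image.fromarray((arr * 255).astype(np.uint8))
    im.save('./Jpeg/' + dicomname + '.jpg')
    return dicomname + '.jpg'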

how to decode predictions in tensorflow_serving

I am using TensorFlow Serving to serve a YOLO_V3 Keras-based model.
Everything is working, but the prediction I'm getting back is a NumPy array; how can I decode it into actual classes?
This is the function that I'm using:
import argparse
import json

import numpy as np
import requests
from keras.preprocessing import image

# Argument parser for giving input image_path from command line
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path of the image")
args = vars(ap.parse_args())
image_path = args['image']

# Preprocessing our input image
img = image.img_to_array(image.load_img(image_path, target_size=(224, 224))) / 255.

# this line is added because of a bug in tf_serving (1.10.0-dev)
img = img.astype('float16')

payload = {
    "instances": [{'input_image': img.tolist()}]
}

# sending POST request to TensorFlow Serving server
r = requests.post('http://localhost:9000/v1/models/coco:predict', json=payload)
pred = json.loads(r.content.decode('utf-8'))

# Decoding the response
print(json.dumps(inception_v3.decode_predictions(np.array(pred['predictions']))[0]))
print(pred)
Now this gives a big list of dictionaries:
{6876, -4.81754398, -4.97275639, -4.97607, -4.67160654, -5.79011679, -5.50944614, -4.7312355, -4.77109718, -3.92586088, -4.7312994, -5.91286898, -4.75739, -5.51783752, -3.74576402, -3.78617764, -4.80710745, -3.40718174, -6.06222868, -6.37612915, -4.38785028, -4.47397232, -4.61613226, -3.94074273, -4.83034134, -5.37178898, -6.06883144, -6.90097952, -5.8203454, -5.86746645, -5.82842541, -6.18484926, -6.35572672, -5.26999807, -6.18502855, -5.94293404, -6.49861908, -5.93721819, -6.14823675, -5.90117025, -2.50677276, -2.75121665, -2.7598753, -3.41550899, -2.62428784, -4.74949789, -5.21396255, -3.80554819, -6.01200724, -5.17863417, -5.40073681, -6.22671843, -5.16459131, -5.0060482, -6.87409353, -4.7619772, -4.86423874, -3.08233714, -5.99755383, -5.63631535, -5.94745159, -4.65718126, -6.86422825, -6.06967211, -0.217412367, 0.0550900251, -0.707384586, 0.70622, -15.0436678, 1.10565865, -4.42187643, -2.98756099, -3.72572565, -3.26745987, -2.93719411, -4.07914925, -2.34109211, -3.39609504, -4.85195971, -3.63920546, -5.17407417, -5.62931442, -4.00350428, -5.02346706, -4.95578051, -4.48103857, -4.72352171, -6.18291044, -5.11228895, -4.26431751, -5.34162378, -5.54107809, -3.85009646, -4.6390605, -3.89385056, -4.55916357, -4.70904684, -4.15174866, -6.43089485, -4.33550072, -4.58510923, -5.21521187, -3.22094584, -5.75563383, -6.88566399, -4.80700207, -4.38226604, -5.49326801, -3.26696324, -4.9029026, -5.82695913, -6.17655087, -6.67043591, -5.67217445, -6.74844885, -5.9330287, -5.71555614, -6.11479235, -4.86581087, -5.67418, -5.19628048, -6.53483, -5.78065157, -5.86239386, -5.4403944, -3.27615428, -3.07651854, -2.16416335, -3.98195553, -4.52128935, -4.46353388, -4.64035559, -4.67165565, -6.01290417, -5.59658194, -5.59060287, -6.70377827, -6.25058842, -5.20660543, -6.53567076, -5.66916752, -3.13120627, -3.95540428, -5.51402569, -5.0683589, -6.33612585, -4.35945034, -6.52084446, -5.74546289]]]}]}
How do I decode it into classes?
Doing this gives a class-mismatch error, because it is a COCO pre-trained model, not ImageNet:
print(json.dumps(inception_v3.decode_predictions(np.array(pred['predictions']))[0]))
What would a similar decoding function for this model be?
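decode_predictions in keras.applications is hard-wired to the 1000 ImageNet classes, so it cannot decode a COCO model's output. For a COCO-trained detector, the decoding step is an index-to-name lookup against the COCO class list, applied after YOLO post-processing (score filtering and NMS, e.g. the yolo_eval step in keras-yolo3 implementations) has turned the raw tensors into class ids and scores. A minimal sketch, assuming you already have integer class ids and scores, and a hypothetical coco_classes.txt with one class name per line in training order:
with open('coco_classes.txt') as f:
    coco_classes = [line.strip() for line in f]

def decode_coco(class_ids, scores):
    # pair each detection's class index with its human-readable name
    return [(coco_classes[int(i)], float(s)) for i, s in zip(class_ids, scores)]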

Prediction on an exported model using tf.estimator on a base64 input in JSON

I want to build a REST API for model prediction. My model accepts a base64 string of an image in a JSON file, as I have tested on Google Cloud ML, but it is not accepting predict requests from my Android app, so I am trying to use a REST API. Testing locally from a Python script, without the API yet, my prediction is giving me many errors.
Here is my model-export code:
import os
import shutil
import tensorflow as tf
from tensorflow import keras

HEIGHT = 48
WIDTH = 48
CHANNELS = 1
version = 'v1'

h5_model_path = os.path.join('model_4layer_2_2_pool yes.h5')
tf_model_path = os.path.join('D:/university/working/trying/Facial-Expression-Recognition-master/tryMore')
estimator = keras.estimator.model_to_estimator(
    keras_model_path=h5_model_path,
    model_dir=tf_model_path)

def image_preprocessing(image):
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(image, [HEIGHT, WIDTH], align_corners=False)
    image = tf.squeeze(image, axis=[0])
    image = tf.cast(image, dtype=tf.uint8)
    return image

# IMAGE AS BASE64 BYTES
def serving_input_receiver_fn():
    def prepare_image(image_str_tensor):
        image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
        return image_preprocessing(image)

    input_ph = tf.placeholder(tf.string, shape=[None])
    images_tensor = tf.map_fn(
        prepare_image, input_ph, back_prop=False, dtype=tf.uint8)
    images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)
    return tf.estimator.export.ServingInputReceiver(
        {'conv2d_1_input': images_tensor},
        {'image_bytes': input_ph})

export_path = os.path.join('models\json_b64', version)
if os.path.exists(export_path):  # clean up old exports with this version
    shutil.rmtree(export_path)
estimator.export_savedmodel(
    export_path,
    serving_input_receiver_fn=serving_input_receiver_fn)
And my code for prediction is as follows:
import tensorflow as tf
import pickle
import os
import imgJSON

dir_path = os.path.dirname(__file__)
exported_path = os.path.join(dir_path, "models/json_b64/v1/1539157474")
model_input = imgJSON.img_bytes
global data

def main():
    with tf.Session() as sess:
        Model = tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], exported_path)
        predictor = tf.contrib.predictor.from_saved_model(exported_path)

        import json
        with open("D:/university/working/trying/Facial-Expression-Recognition-master/tryMore/test_json_b64.json") as f:
            data = json.loads(f.read())
        print(data["image_bytes"])

        output_dict = predictor({"image_bytes": [model_input]})
        print("prediction is", output_dict['scores'])

if __name__ == "__main__":
    main()
My error message is as follows:
InvalidArgumentError (see above for traceback): Expected image (JPEG, PNG, or GIF), got unknown format starting with '/9j/4AAQSkZJRgAB'
[[Node: map/while/DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=1, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/TensorArrayReadV3)]]
How do I predict by passing a base64 string to my exported model?
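The error message itself is a strong hint: '/9j/4AAQSkZJRgAB' is the base64 encoding of a JPEG header, so decode_jpeg is being handed base64 text instead of raw JPEG bytes. TF Serving and Cloud ML decode the {"b64": ...} JSON wrapping server-side, but when calling the predictor directly in Python the decoding has to be done by hand. A sketch under that assumption (the exact layout of test_json_b64.json is guessed):
import base64
import json

with open('test_json_b64.json') as f:
    data = json.loads(f.read())

b64_text = data['image_bytes']
if isinstance(b64_text, dict):  # Cloud ML style {"b64": "..."} wrapping
    b64_text = b64_text['b64']
raw_bytes = base64.b64decode(b64_text)  # raw JPEG bytes for decode_jpeg

output_dict = predictor({'image_bytes': [raw_bytes]})
print('prediction is', output_dict['scores'])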

Exported Keras classification model served by TF Server gives: Expects arg[0] to be float but string is provided

I have trained a classification model in Keras (latest versions of Keras and TF as of this writing) which is similar in input and output to CIFAR10. To serve this model I export it as a classification model (see the type argument) using the following code:
def keras_model_to_tf_serve(saved_keras_model,
                            local_version_dir,
                            type='classification',
                            save_model_version=1):
    sess = tf.Session()
    K.set_session(sess)
    K.set_learning_phase(0)

    old_model = load_model(saved_keras_model)
    config = old_model.get_config()
    weights = old_model.get_weights()
    new_model = Sequential.from_config(config)
    new_model.set_weights(weights)

    classification_inputs = utils.build_tensor_info(new_model.input)
    classification_outputs_classes = utils.build_tensor_info(new_model.output)

    # The classification signature
    classification_signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={signature_constants.CLASSIFY_INPUTS: classification_inputs},
        outputs={
            signature_constants.CLASSIFY_OUTPUT_CLASSES:
                classification_outputs_classes
        },
        method_name=signature_constants.CLASSIFY_METHOD_NAME)
    #print(classification_signature)

    # The prediction signature
    tensor_info_x = utils.build_tensor_info(new_model.input)
    tensor_info_y = utils.build_tensor_info(new_model.output)
    prediction_signature = tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'inputs': tensor_info_x},
        outputs={'outputs': tensor_info_y},
        method_name=signature_constants.PREDICT_METHOD_NAME)

    legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')
    print(prediction_signature)

    save_model_dir = os.path.join(local_version_dir, str(save_model_version))
    if os.path.exists(save_model_dir) and os.path.isdir(save_model_dir):
        shutil.rmtree(save_model_dir)
    builder = saved_model_builder.SavedModelBuilder(save_model_dir)

    with K.get_session() as sess:
        if type == 'classification':
            builder.add_meta_graph_and_variables(
                sess, [tag_constants.SERVING],
                signature_def_map={
                    signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                        classification_signature,
                },
                clear_devices=True, legacy_init_op=legacy_init_op)
        elif type == 'prediction':
            builder.add_meta_graph_and_variables(
                sess, [tag_constants.SERVING],
                signature_def_map={
                    # Uncomment the first two lines below and comment out the subsequent four to reset.
                    # signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                    #     classification_signature,
                    'predict_results':
                        prediction_signature,
                },
                clear_devices=True, legacy_init_op=legacy_init_op)
        else:
            builder.add_meta_graph_and_variables(
                sess, [tag_constants.SERVING],
                signature_def_map={
                    # Uncomment the first two lines below and comment out the subsequent four to reset.
                    # signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                    #     classification_signature,
                    'predict_results':
                        prediction_signature,
                    signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                        classification_signature
                },
                clear_devices=True, legacy_init_op=legacy_init_op)
        builder.save()
This exports fine, and using saved_model_cli I get the following output:
saved_model_cli show --dir /develop/1/ --tag_set serve --signature_def serving_default

The given SavedModel SignatureDef contains the following input(s):
  inputs['inputs'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 32, 32, 3)
      name: conv2d_1_input_1:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['classes'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 10)
      name: activation_6_1/Softmax:0
Method name is: tensorflow/serving/classify
Thus the model expects to get DT_FLOAT of shape (-1, 32, 32, 3). As this is a classification model (which for some reason is /very/ different from a prediction model in how it is used), I took @sdcbr's code (TensorFlow Serving: Pass image to classifier) and made some minute modifications:
import tensorflow as tf
import numpy as np
from tensorflow_serving.apis import classification_pb2, input_pb2
from grpc.beta import implementations
from tensorflow_serving.apis import prediction_service_pb2
from tensorflow.python.saved_model import signature_constants

image = np.random.rand(32, 32, 3)

def _float_feature(value):
    return tf.train.Feature(float_list=tf.train.FloatList(value=value))

request = classification_pb2.ClassificationRequest()
request.model_spec.name = 'model'
request.model_spec.signature_name = signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY

image = image.flatten().tolist()
image = [float(x) for x in image]
example = tf.train.Example(features=tf.train.Features(feature={'image': _float_feature(image)}))

inp = input_pb2.Input()
inp.example_list.examples.extend([example])
request.input.CopyFrom(inp)

channel = implementations.insecure_channel('localhost', 5005)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
response = stub.Classify(request, 10.0)
where TF Serving is running locally on my machine on that port and is given the spec name when starting. This, as far as I can see, should work, but when I run it I get the following error (shortened here for brevity):
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Expects arg[0] to be float but string is provided"
debug_error_string = "
{"created":"#1533046733.211573219","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1083,"grpc_message":"Expects arg[0] to be float but string is provided","grpc_status":3}"
Any ideas? After hours of searching, this was the only way I could get any sort of image data placed into a classification request.
Try using prediction_service_pb2_grpc.PredictionServiceStub(channel) instead of prediction_service_pb2.beta_create_PredictionService_stub(channel). Apparently this was recently moved out of beta. You can refer to this example; a sketch of the change is below.
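A sketch of that swap, keeping the request-building code above unchanged (host and port follow the question):
import grpc
from tensorflow_serving.apis import prediction_service_pb2_grpc

channel = grpc.insecure_channel('localhost:5005')
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
response = stub.Classify(request, 10.0)  # 10-second timeout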
