Object Detection MRCNN for Multiple Classes - python

I want to detect both the tumor and the brain in an MRI scan and create a mask for each of them.
I have used the following code to create a mask for the tumor only.
Link to the notebook:
https://colab.research.google.com/github/pysource7/utilities/blob/master/Train_Mask_RCNN_(DEMO).ipynb#scrollTo=SyzLXzF5BqiN
Please tell me how to make it run for multiple classes.
I am new to this field and would be grateful for any help.
This is what I have tried for training the model:
%tensorflow_version 1.x
!pip install --upgrade h5py==2.10.0
!wget https://pysource.com/extra_files/Mask_RCNN_basic_1.zip
!unzip Mask_RCNN_basic_1.zip
import sys
sys.path.append("/content/Mask_RCNN/mrcnn")
from m_rcnn import *
%matplotlib inline
# Extract Images
images_path = "images.zip"
annotations_path = "annotations.json"
extract_images(os.path.join("/content/",images_path), "/content/dataset")
dataset_train = load_image_dataset(os.path.join("/content/", annotations_path), "/content/dataset", "train")
dataset_val = load_image_dataset(os.path.join("/content/", annotations_path), "/content/dataset", "val")
class_number = dataset_train.count_classes()
print('Train: %d' % len(dataset_train.image_ids))
print('Validation: %d' % len(dataset_val.image_ids))
print("Classes: {}".format(class_number))
# Load image samples
display_image_samples(dataset_train)
# Load Configuration
config = CustomConfig(class_number)
#config.display()
model = load_training_model(config)
# Start Training
# This operation might take a long time.
train_head(model, dataset_train, dataset_train, config)
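For reference, the wrapper in this notebook builds on the Matterport Mask R-CNN library (the unzipped /content/Mask_RCNN folder), where multi-class training comes down to registering every class in the dataset and reflecting the total class count (including background) in the config. If annotations.json already contains both a "brain" and a "tumor" label, dataset_train.count_classes() should report 2 and CustomConfig(class_number) should pick that up automatically. The following is only a minimal sketch of what that corresponds to in the plain mrcnn API; the class names, source name, and loader method are illustrative assumptions, not the wrapper's actual code:

from mrcnn.config import Config
from mrcnn.utils import Dataset

class BrainTumorDataset(Dataset):
    def load_scans(self, annotations, images_dir):
        # Register every class; IDs start at 1 because 0 is the background.
        self.add_class("mri", 1, "brain")
        self.add_class("mri", 2, "tumor")
        # Each annotated image must store its polygons together with the
        # class name so load_mask() can return per-class masks.
        ...

class MultiClassConfig(Config):
    NAME = "mri"
    NUM_CLASSES = 1 + 2  # background + brain + tumor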

Related

How can I load a pre-trained onnx model in rembg Python script?

I am trying to load the pre-trained u2net_human_seg.onnx model in my Python program to use it for better background removal.
When I try, I get this error: _pickle.UnpicklingError: invalid load key, '\x08'.
My code:
import rembg
from PIL import Image
import torch
import numpy as np

def remove_background(input_path):
    input = Image.open(input_path)
    output = remove(input)
    output.save(input_path)

def remove(input):
    input_tensor = torch.from_numpy(np.array(input)).float()
    output = model(input_tensor)
    output_image = Image.fromarray(output.detach().numpy())
    return output_image

if __name__ == '__main__':
    model = torch.load("models/u2net_human_seg.onnx")
    remove_background("images/test.jpg")
My input is test.jpg, in which two people are visible. The paths should be correct...
I didn't find similar cases online, as no one seems to load a model in a custom application.
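For comparison, rembg does not expect the ONNX file to be loaded with torch.load; it manages its own ONNX Runtime session. Below is a minimal sketch of how the human-segmentation model is normally selected, assuming a recent rembg 2.x version that exposes new_session (the file paths are illustrative):

from PIL import Image
from rembg import remove, new_session

# rembg loads u2net_human_seg through ONNX Runtime internally.
session = new_session("u2net_human_seg")

input_image = Image.open("images/test.jpg")
output_image = remove(input_image, session=session)
output_image.save("images/test_no_bg.png")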

Could not find class for TF Ops: TensorListFromTensor when I'm trying to import a trained model with Tensorflow in DeepLearning4j

I'm new to Tensorflow and I'm trying to import a frozen graph (.pb file) that was trained in Python into a Java project using Deeplearning4j.
It seems that the model was saved successfully and it is working in Python, but when I try to import it with DL4J I'm getting the following issue and I don't know why:
Exception in thread "main" java.lang.IllegalStateException: Could not find class for TF Ops: TensorListFromTensor
at org.nd4j.common.base.Preconditions.throwStateEx(Preconditions.java:639)
at org.nd4j.common.base.Preconditions.checkState(Preconditions.java:301)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:283)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:141)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:87)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:73)
at MLModel.loadModel(MLModel.java:30)
This is my model in Python:
def RNN():
    inputs = tf.keras.layers.Input(name='inputs', shape=[max_len])
    layer = tf.keras.layers.Embedding(max_words, 50, input_length=max_len)(inputs)
    layer = tf.keras.layers.LSTM(64)(layer)
    layer = tf.keras.layers.Dense(256, name='FC1')(layer)
    layer = tf.keras.layers.Activation('relu')(layer)
    layer = tf.keras.layers.Dropout(0.5)(layer)
    layer = tf.keras.layers.Dense(12, name='out_layer')(layer)
    layer = tf.keras.layers.Activation('softmax')(layer)
    model = tf.keras.models.Model(inputs=inputs, outputs=layer)
    return model
I based the export of the model on this blog post: Save, Load and Inference From TensorFlow 2.x Frozen Graph
And this is how I'm trying to import the model in Java with DeepLearning4J:
public static void loadModel(String filepath) throws Exception {
    File file = new File(filepath);
    if (!file.exists()) {
        file = new File(filepath);
    }
    sd = TFGraphMapper.importGraph(file);
    if (sd == null) {
        throw new Exception("Error loading model : " + file);
    }
}
The exception is thrown at sd = TFGraphMapper.importGraph(file);
Does anyone know if I'm missing something?
That is the old model import; please use the new one. The old one is no longer supported and will not be. You can find the new import here:
https://deeplearning4j.konduit.ai/samediff/explanation/model-import-framework
Both TensorFlow and ONNX imports work similarly. For TensorFlow use:
//create the framework importer
TensorflowFrameworkImporter tensorflowFrameworkImporter = new TensorflowFrameworkImporter();
File pathToPbFile = ...;
SameDiff graph = tensorflowFrameworkImporter.runImport(pathToPbFile.getAbsolutePath(),Collections.emptyMap());
File an issue on the github repo: https://github.com/deeplearning4j/deeplearning4j/issues/new if something doesn't work for you.
Also note that if you use the tf.keras API you can also import the model using the Keras HDF5 format (the old one).
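As a brief sketch of that route on the Python side, assuming the RNN() model defined above (DL4J's Keras model import, not shown here, can then read the .h5 file):

# Save the tf.keras model in the legacy HDF5 format for DL4J's Keras import.
model = RNN()
model.save("rnn_model.h5", save_format="h5")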
For many graphs, you may also need to save the model and freeze it. You can use something like this:
import tensorflow as tf
from tensorflow.core.framework.graph_pb2 import GraphDef
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

def convert_saved_model(saved_model_dir) -> GraphDef:
    """
    Convert the saved model (expanded as a directory)
    to a frozen graph def
    :param saved_model_dir: the input model directory
    :return: the loaded graph def with all parameters in the model
    """
    saved_model = tf.saved_model.load(saved_model_dir)
    graph_def = saved_model.signatures['serving_default']
    frozen = convert_variables_to_constants_v2(graph_def)
    return frozen.graph.as_graph_def()
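A short usage sketch, assuming the SavedModel lives in a directory named saved_model_dir and the frozen graph should be written to frozen_model.pb for the DL4J import (the names are illustrative):

import tensorflow as tf

graph_def = convert_saved_model("saved_model_dir")
# Serialize the frozen GraphDef to a .pb file that DL4J can import.
tf.io.write_graph(graph_def, logdir=".", name="frozen_model.pb", as_text=False)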
We publish more code and utilities for that kind of thing here:
https://github.com/deeplearning4j/deeplearning4j/tree/master/contrib/omnihub/src/omnihub/frameworks

No valid model found in run history. This means smac was not able to fit a valid model. Please check the log file for errors

I have 1000+ datasets in total, and I have to train the same number of models and save them in a folder called models.
The code below works very well and I'm getting what I want. The only issue I'm facing is that at around the 554th model it gives me this error:
No valid model found in run history. This means smac was not able to fit a valid model.
Please check the log file for errors.
Am I doing anything wrong here?
My code:
from joblib import Parallel, delayed
from sklearn.svm import LinearSVC
import numpy as np
import pandas as pd
import autosklearn.regression
import pickle
import timeit
import os
import warnings

warnings.filterwarnings("ignore")

def train_model(filename):
    print('Reading Dataset: ' + str(filename))
    data = pd.read_csv(filename)
    train_data = data[data['state'] == 'done']
    automl = autosklearn.regression.AutoSklearnRegressor(
        time_left_for_this_task=30,
        metric=autosklearn.metrics.r2,
        memory_limit=None
    )
    X_train = train_data[['feature1', 'feature2']]
    y_train = train_data[['target_column']]
    print("Training Started: " + str(filename))
    automl.fit(X_train, y_train)
    print('Saving Model: ' + str(filename))
    model_path = 'models/' + str(filename.split('.')[0])
    if not os.path.exists(model_path):
        os.makedirs(model_path)
    model_filename = model_path + '/finalized_model.sav'
    pickle.dump(automl, open(model_filename, 'wb'))
    return True

if __name__ == "__main__":
    start = timeit.default_timer()
    result = Parallel(n_jobs=4)(delayed(train_model)(filename) for filename in ['dataset_1.csv', 'dataset_2.csv', 'dataset_3.csv', ..., 'dataset_n.csv'])
    stop = timeit.default_timer()
    print('Time: ', (stop - start)/60, 'Minutes')
I found the cause of the issue.
It happens because there is too little memory left in RAM.
I couldn't find any documentation about this, but I monitored RAM utilisation continuously while running the script, and when no memory was left the script terminated with the error above.
If anyone has more information about this, their contribution would be helpful for the community.
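As a hedged mitigation sketch based on that observation: auto-sklearn's memory_limit parameter caps the memory each model search may use (in MB), and running fewer joblib workers in parallel lowers peak RAM usage. The values below are illustrative, not tuned:

import autosklearn.regression
from joblib import Parallel, delayed

def train_model(filename):
    # ... same as above, but cap memory per auto-sklearn run instead of
    # memory_limit=None so SMAC is not starved when RAM runs low.
    automl = autosklearn.regression.AutoSklearnRegressor(
        time_left_for_this_task=30,
        metric=autosklearn.metrics.r2,
        memory_limit=3072,  # MB per run; illustrative, tune for your machine
    )
    ...

# Fewer parallel workers also reduce peak RAM usage.
filenames = ['dataset_1.csv', 'dataset_2.csv']  # illustrative
result = Parallel(n_jobs=2)(delayed(train_model)(f) for f in filenames)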

Problem building tensorflow model from huggingface weights

I need to work with the pretrained BERT model 'dbmdz/bert-base-italian-xxl-cased' from Hugging Face in TensorFlow (at this link).
After reading this on the website,
Currently only PyTorch-Transformers compatible weights are available. If you need access to TensorFlow checkpoints, please raise an issue!
I raised the issue and was promptly given a download link to an archive containing the following files:
$ ls bert-base-italian-xxl-cased/
config.json model.ckpt.index vocab.txt
model.ckpt.data-00000-of-00001 model.ckpt.meta
I'm now trying to load the model and work with it, but everything I have tried has failed.
I tried following this suggestion from a Hugging Face discussion:
bert_folder = str(Config.MODELS_CONFIG.BERT_CHECKPOINT_DIR) # folder in which I have the files extracted from the archive
from transformers import BertConfig, TFBertModel
config = BertConfig.from_pretrained(bert_folder) # this gets loaded correctly
After this point I tried several combinations to load the model, but always unsuccessfully, e.g.:
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index", config=config)
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index", config=config, from_pt=True)
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased", config=config, local_files_only=True)
They always result in this error:
404 Client Error: Not Found for url: https://huggingface.co/models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index/resolve/main/tf_model.h5
...
...
OSError: Can't load weights for '../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index'. Make sure that:
- '../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index' is a correct model identifier listed on 'https://huggingface.co/models'
- or '../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
So my question is: How can I load this pre-trained BERT model from those files and use it in TensorFlow?
You can try the following snippet to load dbmdz/bert-base-italian-xxl-cased in TensorFlow.
from transformers import AutoTokenizer, TFBertModel
model_name = "dbmdz/bert-base-italian-xxl-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFBertModel.from_pretrained(model_name)
If you want to load from the given TensorFlow checkpoint, you could try something like this:
model = TFBertModel.from_pretrained("../../models/pretrained/bert-base-italian-xxl-cased/model.ckpt.index", config=config, from_tf=True)
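A short usage sketch for the hub-loaded tokenizer and model from the first snippet (the example sentence is illustrative):

# Tokenize an Italian sentence and run it through the TF model.
inputs = tokenizer("Buongiorno, come stai?", return_tensors="tf")
outputs = model(inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)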

How to use multiple inputs for custom Tensorflow model hosted by AWS Sagemaker

I have a trained TensorFlow model that uses two inputs to make predictions. I have successfully set up and deployed the model on AWS SageMaker.
from sagemaker.tensorflow.model import TensorFlowModel

sagemaker_model = TensorFlowModel(model_data='s3://' + sagemaker_session.default_bucket()
                                  + '/R2-model/R2-model.tar.gz',
                                  role=role,
                                  framework_version='1.12',
                                  py_version='py2',
                                  entry_point='train.py')

predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')

predictor.predict([data_scaled_1.to_csv(),
                   data_scaled_2.to_csv()])
I always receive an error. I could use an AWS Lambda function, but I don't see any documentation on specifying multiple inputs to deployed models. Does anyone know how to do this?
You need to build a correct signature when deploying the model first.
Also, you need to deploy with TensorFlow Serving.
At inference time, you also need to send a properly formatted input with the request: the SageMaker Docker server takes the request input and passes it on to TensorFlow Serving, so the input needs to match the TF Serving inputs.
Here is a simple example of deploying a Keras multi-input multi-output model with TensorFlow Serving using SageMaker, and how to make inference requests afterwards:
import tarfile

from tensorflow.python.saved_model import builder
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model import tag_constants
from keras import backend as K
import sagemaker
#nano ~/.aws/config
#get_ipython().system('nano ~/.aws/config')
from sagemaker import get_execution_role
from sagemaker.tensorflow.serving import Model

def serialize_to_tf_and_dump(model, export_path):
    """
    serialize a Keras model to TF model
    :param model: compiled Keras model
    :param export_path: str, The export path contains the name and the version of the model
    :return:
    """
    # Build the Protocol Buffer SavedModel at 'export_path'
    save_model_builder = builder.SavedModelBuilder(export_path)
    # Create prediction signature to be used by TensorFlow Serving Predict API
    signature = predict_signature_def(
        inputs={
            "input_type_1": model.input[0],
            "input_type_2": model.input[1],
        },
        outputs={
            "decision_output_1": model.output[0],
            "decision_output_2": model.output[1],
            "decision_output_3": model.output[2]
        }
    )
    with K.get_session() as sess:
        # Save the meta graph and variables
        save_model_builder.add_meta_graph_and_variables(
            sess=sess, tags=[tag_constants.SERVING], signature_def_map={"serving_default": signature})
        save_model_builder.save()

# instantiate model
model = ....

# convert to tf model
serialize_to_tf_and_dump(model, 'model_folder/1')

# tar tf model
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('model_folder', recursive=True)

# upload it to s3
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz')

# convert to sagemaker model
role = get_execution_role()
sagemaker_model = Model(model_data=inputs,
                        name='DummyModel',
                        role=role,
                        framework_version='1.12')

predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.t2.medium',
                                   endpoint_name='MultiInputMultiOutputModel')
At inference time, here is how to request predictions:
import json
import boto3

x_inputs = ...  # list with 2 np arrays of size (batch_size, ...)
data = {
    'inputs': {
        "input_type_1": x_inputs[0].tolist(),
        "input_type_2": x_inputs[1].tolist()
    }
}

endpoint_name = 'MultiInputMultiOutputModel'
client = boto3.client('runtime.sagemaker')
response = client.invoke_endpoint(EndpointName=endpoint_name, Body=json.dumps(data), ContentType='application/json')
predictions = json.loads(response['Body'].read())
You likely need to customize the inference functions loaded in the endpoint. In the SageMaker TF SDK docs you can find that there are two options for SageMaker TensorFlow deployment:
Python endpoint (the default): check whether modifying the input_fn can accommodate your inference scheme
TF Serving endpoint
You can diagnose errors in CloudWatch (accessible through the SageMaker endpoint UI), choose the more appropriate serving architecture of the two, and customize the inference functions if need be.
Only the TF serving endpoint supports multiple inputs in one inference request. You can follow the documentation here to deploy a TFS endpoint -
https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst
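For completeness, here is a hedged sketch of the same multi-input request made through the SageMaker 1.x SDK's TFS predictor instead of boto3, assuming an endpoint deployed as in the first answer (the endpoint name and array shapes are illustrative):

import numpy as np
from sagemaker.tensorflow.serving import Predictor

predictor = Predictor(endpoint_name='MultiInputMultiOutputModel')

# The TFS REST API accepts a dict of named inputs matching the signature.
data = {
    'inputs': {
        'input_type_1': np.zeros((1, 10)).tolist(),  # illustrative shapes
        'input_type_2': np.zeros((1, 5)).tolist(),
    }
}
predictions = predictor.predict(data)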
