What I did:
convert the .weights file to a SavedModel (.pb) using tf.keras.models.save_model()
load the new model file in the two ways below:
import tensorflow as tf
from tensorflow.python.saved_model import tag_constants

# 1st way: tf.saved_model.load
model = tf.saved_model.load(model_path, tags=[tag_constants.SERVING])
infer = model.signatures['serving_default']
res = infer(tf.constant(batch_data))

# 2nd way: tf.keras.models.load_model
model = tf.keras.models.load_model(model_path, compile=False)
res = model.predict(batch_data)
The first runs at 15 FPS, while the second runs at only 10 FPS, i.e. 1.5x slower.
My ultimate goal is to output both intermediate layer outputs and the final predictions, and the only (simple) way to achieve this in TF2 is tf.keras.Model(model.inputs, [<layer>.output, model.output]). I need the loaded model to be a Keras object to build this.
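Concretely, the multi-output model I want would look something like this (a minimal sketch; 'conv_out' is a placeholder for whichever layer I want to tap):

model = tf.keras.models.load_model(model_path, compile=False)
# Build a second model that exposes an intermediate tensor plus the final output.
multi_out = tf.keras.Model(
    inputs=model.inputs,
    outputs=[model.get_layer('conv_out').output, model.output])
features, preds = multi_out.predict(batch_data)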
So how can I go the Keras route and keep the same inference speed?
I am looking for a pretrained OCR model that works with PyTorch. I tried https://github.com/clovaai/CRAFT-pytorch, but it is not supported on PyTorch Hub, and I cannot load the .pth file directly because it contains only weights, not the model. How do I load the model?
My first model is a YOLOv5 model trained on custom data; it crops an image and sends it to the next model, which should do OCR, mostly digit recognition. But I can't get CRAFT-pytorch to load:
import cv2
import torch

# The YOLOv5 detector loads fine from a local checkpoint:
model = torch.hub.load('.', 'custom', path='runs/train/exp2/weights/best.pt', source='local', force_reload=True)

# This throws an error with PyTorch Hub:
model_ocr = torch.hub.load('clovaai/CRAFT-pytorch', 'craft_mlt_25k.pth')

# Tried torch.load too, but the .pth file holds only weights, not the model:
ocr_model = torch.load('runs/ocr/craft_mlt_25k.pth')

cap = cv2.VideoCapture('../Dataset/test/09-10.mp4')
while cap.isOpened():
    ret, frame = cap.read()
    results = model(frame)
    crops = results.crop(save=False)
    for crop in crops:
        if 'number' in crop['label']:
            ocr_result = model_ocr(crop['im'])
            ocr_crop = ocr_result.crop(save=False)
If you have the source code for the model, you can instantiate it and load its weights; but as I understood the question, you wish to do it without the source code.
For that, you can just call torch.load:
state_dict = torch.load('model_file.pth')
state_dict should just be a dictionary (or a dict-like object). You can examine it and pick it apart as you wish:
print(state_dict.keys())

with torch.no_grad():
    my_model.some_weight[:] = state_dict['some_other_name']
(Note that it's not safe to call torch.load on untrusted data)
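That said, in this particular case the source code is available: the clovaai/CRAFT-pytorch repo ships the model definition in craft.py. Here is a sketch of the source-code route, assuming that file is importable:

import torch
from craft import CRAFT  # model definition from the CRAFT-pytorch repo

net = CRAFT()
state_dict = torch.load('craft_mlt_25k.pth', map_location='cpu')
# The released checkpoints were saved from nn.DataParallel, so the keys may
# carry a 'module.' prefix that must be stripped before load_state_dict.
state_dict = {k[len('module.'):] if k.startswith('module.') else k: v
              for k, v in state_dict.items()}
net.load_state_dict(state_dict)
net.eval()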
I want to do quantization-aware training with my Keras model. I have tried the code below. I'm using TensorFlow 1.14.0.
import tensorflow as tf

train_graph = tf.Graph()
train_sess = tf.compat.v1.Session(graph=train_graph)
tf.compat.v1.keras.backend.set_session(train_sess)

with train_graph.as_default():
    tf.keras.backend.set_learning_phase(1)
    model = my_keras_model()
    tf.contrib.quantize.create_training_graph(input_graph=train_graph, quant_delay=5)
    train_sess.run(tf.global_variables_initializer())

    model.compile(...)
    model.fit_generator(...)

    saver = tf.compat.v1.train.Saver()
    saver.save(train_sess, checkpoint_path)
It runs without errors.
However, the size of the saved model (h5 and ckpt) is exactly the same as that of the model without quantization.
Is this the right way? How can I check whether the model is quantized properly?
Or is there a better way to quantize?
When you finish quantization-aware training and save your model to disk, it is not actually quantized yet. In other words, it is "prepared" for quantization, but the weights are still float32. You have to further convert your model to TFLite for it to actually be quantized. You can do so with the following piece of code:
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
This will quantize your model with int8 weights and uint8 activations.
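A quick way to verify that the conversion really quantized the weights is to write the flatbuffer to disk and compare its size with the original model (a small sketch; the file name is a placeholder):

import os

# Persist the quantized flatbuffer.
with open('model_quant.tflite', 'wb') as f:
    f.write(quantized_tflite_model)

# A weight-quantized model should be roughly 4x smaller than its float32
# counterpart, since int8 weights replace float32 ones.
print(os.path.getsize('model_quant.tflite'), 'bytes')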
Have a look at the official example for further reference.
I'm new to TensorFlow and the Dataset APIs. It looks like I'm not feeding the data to TensorFlow in the right format; I get the following output:
tensorflow.python.framework.errors_impl.InvalidArgumentError: In[0] is not a matrix. Instead it has shape [] [Op:MatMul]
My code is:
import tensorflow as tf

train_dataset = tf.data.experimental.CsvDataset(
    "train.csv",
    [tf.float32, tf.int32])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(1,)),  # input shape required
    tf.keras.layers.Dense(10, activation=tf.nn.relu),
    tf.keras.layers.Dense(1)
])

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
model.compile(optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(train_dataset.make_one_shot_iterator(),
          steps_per_epoch=1,
          verbose=2)
I would expect the model to start training.
I have a few observations:
1) As far as I understand, the problem is with the format of the data you feed in. In the comments you said that your CSV file has a (100, 2) shape: one feature column plus one label column. But you declared input_shape=(1,), so the model expects batches of 1-dimensional feature vectors, while CsvDataset yields unbatched scalar tensors; that is exactly what the "In[0] is not a matrix. Instead it has shape []" MatMul error is complaining about (see the sketch after this list). Beyond that, it is very difficult (not to say impossible) to learn anything meaningful from such a small input. If you're trying to learn, I'd suggest starting with another dataset; the internet is full of cool datasets designed specifically for learning the basics of ML.
2) If you are using Keras, then you don't need plain TensorFlow functions (such as tf.train.GradientDescentOptimizer). You can train models with Keras functions alone (which have a different syntax), and you can group a lot of the setup in model.compile. I suggest you take a look at the Keras Guide to Sequential models.
3) I strongly suggest you import data using the pandas library. You can do that with:
import pandas as pd
dataframe = pd.read_csv("path/to/dataframe.csv")
And then you convert it into a matrix with:
dataframe = dataframe.values
This turns it into a numpy array, which works perfectly for pretty much any deep learning library.
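Coming back to the error itself, here is a minimal sketch of how the dataset could be reshaped and batched so the MatMul receives proper (batch_size, 1) matrices instead of scalars (the batch size of 32 is an assumption):

train_dataset = tf.data.experimental.CsvDataset("train.csv", [tf.float32, tf.int32])

# Pack the single feature into a length-1 vector and batch the examples,
# so each step feeds a (batch_size, 1) matrix rather than a scalar.
train_dataset = train_dataset.map(lambda x, y: (tf.reshape(x, (1,)), y)).batch(32)

model.fit(train_dataset.make_one_shot_iterator(), steps_per_epoch=1, verbose=2)
# (On newer TF versions you can pass train_dataset to model.fit directly.)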
I am new to deep learning and would like to implement a simple classification task using Keras. My dataset contains over 2000 images, and for each image I have a corresponding JSON file which contains the label for that image. The following code loads the JSON files and creates the X (image) and Y (label) arrays:
import concurrent.futures
import glob
import json

import cv2

X = []
Y = []

with concurrent.futures.ProcessPoolExecutor() as executor:
    # Get a list of files to process
    json_files = glob.glob(jsonpath + '/*.json')
    for jsonfile, y in zip(json_files, executor.map(create_array, json_files)):
        X.append(y[0])
        Y.append(y[1])
where the function create_array is defined as follows:
def create_array(jsonfile):
    array_list = []
    y_list = []
    with open(jsonfile) as f:
        data = json.load(f)
    name = data['annotation']['data_filename']
    img = cv2.imread(imgDIR + '/' + name)
    array_list.append(img)
    l = data['annotation']['data_annotation']['classification'][0]['classification_label']
    y_list.append(l)
    return array_list, y_list
It works for a small number of images, say 15, but for the entire set of 2000 images the program gets killed, or sometimes it fails with "MemoryError: out of memory".
Is there an efficient way to do this? How can I speed up this data pre-processing part to give it as an input to the keras classification model?
It seems like your images are pretty much ready for training, and your preprocessing is really just about loading the files. The JSON format might not be the fastest approach when it comes to loading data; if you use something like pickle to save and load your images, you might see a speed boost.
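For illustration, the pickle route could look roughly like this (a sketch; it assumes the decoded X/Y arrays fit in memory at least once for the one-time conversion):

import pickle

# One-time conversion: dump the already-decoded arrays into a single file.
with open('dataset.pkl', 'wb') as f:
    pickle.dump((X, Y), f, protocol=pickle.HIGHEST_PROTOCOL)

# Later runs skip the JSON parsing and cv2.imread work entirely.
with open('dataset.pkl', 'rb') as f:
    X, Y = pickle.load(f)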
The other question is how to pass the data to Keras efficiently. Normally you would use model.fit, but since not all your data fits into memory, you can use model.fit_generator instead.
The Keras docs give the following hint:
The generator is run in parallel to the model, for efficiency. For instance, this allows you to do real-time data augmentation on images on CPU in parallel to training your model on GPU.
The use of keras.utils.Sequence guarantees the ordering and guarantees the single use of every input per epoch when using use_multiprocessing=True.
Here is an example of how to implement such a generator.
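For reference, a minimal sketch of such a Sequence, reusing create_array from the question (the batch size and the numpy packing are assumptions):

import math
import numpy as np
from tensorflow.keras.utils import Sequence

class JsonImageSequence(Sequence):
    def __init__(self, json_files, batch_size=32):
        self.json_files = json_files
        self.batch_size = batch_size

    def __len__(self):
        return math.ceil(len(self.json_files) / self.batch_size)

    def __getitem__(self, idx):
        batch = self.json_files[idx * self.batch_size:(idx + 1) * self.batch_size]
        # create_array returns ([image], [label]) for one JSON file, so images
        # are only loaded when the batch is actually requested.
        pairs = [create_array(f) for f in batch]
        X = np.array([p[0][0] for p in pairs])
        Y = np.array([p[1][0] for p in pairs])
        return X, Y

# model.fit_generator(JsonImageSequence(json_files), workers=4, use_multiprocessing=True)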
I am using tf.contrib.learn.DNNLinearCombinedRegressor with wide and deep feature columns, and I use an input_fn to send data to this regressor for training. I have both real-valued and categorical features, so my deep feature columns are made of sparse columns with embeddings as well as real-valued columns, while my linear columns are real-valued columns. After training, I can naively access the trained model for prediction through the estimator.predict_scores() method. However, to my understanding this recreates the TensorFlow graph from the checkpoint file, which is too slow for my intended application. I then tried estimator.export_savedmodel() to see whether I can save and load the graph before actual prediction in my application, but I got stuck with it. I am facing the following issues:
To export, I did the following:
feature_spec = tf.feature_column.make_parse_example_spec(deep_feature_cols)
export_input_fn = tf.contrib.learn.utils.build_parsing_serving_input_fn(feature_spec)
servable_model_path = regressor.export_savedmodel(
    export_dir_base=servable_model_dir,
    serving_input_fn=export_input_fn,
    as_text=True)
(Note that the raw and default parsing functions didn't work for me because of a VarLen feature column, i.e. the embedding column.) Since I construct feature_spec with my deep columns only, I don't understand how the saved model will know what I am using for my linear columns. I don't know how to combine the two types of columns for saving/exporting.
While I was able to export the trained regressor model successfully, I couldn't figure out how to use it for predictions. I have seen tutorials for doing this with TensorFlow Serving, such as this one. However, I just want something simple that I can load and use in Python. I am working on an interactive GUI application on Windows, so I don't want to use bazel to build a saved model for TensorFlow Serving. I tried the following from this post, but it didn't work with my input_fn:
from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model(servable_model_path)
predictions = predict_fn(get_input_fn(X, y, LABEL, num_epochs=1, shuffle=False))
where get_input_fn is my custom input_fn. When I run this code, I get the following error:
AttributeError                            Traceback (most recent call last)
<ipython-input-15-128f4d1ba155> in <module>()
----> 1 predictions = predict_fn(get_input_fn(X, y, LABEL, num_epochs=1, shuffle=False))

~\AppData\Local\Continuum\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\predictor\predictor.py in __call__(self, input_dict)
     63     """
     64     # TODO(jamieas): make validation optional?
---> 65     input_keys = set(input_dict.keys())
     66     expected_keys = set(self.feed_tensors.keys())
     67     unexpected_keys = input_keys - expected_keys

AttributeError: 'function' object has no attribute 'keys'
Any input is appreciated. I am on Windows 10, using TensorFlow 1.3 with Python 3.5 in an Anaconda virtual environment.
I poked around a bit more: predict_fn expects a dictionary whose keys have exactly the same names as the feature variables your model expects. In my CNN model the input was named "inputs", as below:
def _cnn_model_fn(features, labels, mode):
    # Input Layer
    print("features shape", features['inputs'])
Here's what I passed to predict_fn, where myimage is a flattened np.array:
predictions = predict_fn({"inputs": [myimage]})
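One more tip: if you are not sure which key names an exported model expects, the predictor object exposes them through its feed_tensors attribute (the same attribute visible in the traceback above), so you can print them before calling it:

from tensorflow.contrib import predictor

predict_fn = predictor.from_saved_model(servable_model_path)
# These keys are the exact names the input dictionary must use.
print('inputs: ', list(predict_fn.feed_tensors.keys()))
print('outputs:', list(predict_fn.fetch_tensors.keys()))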