I have trained a model in Keras using transfer learning on top of the VGG16 model, as described in the blog post Building powerful image classification models using very little data.
When I saved the model using the model.save() method in Keras, the output file (in .h5 format) was about 200 MB.
I need to push this code to GitHub, but files larger than 100 MB can't be uploaded there.
So, is there any way to save the model in a smaller file?
Note: I need the model only for prediction purposes.
The docs have your solution. save() is there to easily save and load the entire state so you can continue working later, but if I understand correctly you need much less:
Saving/loading only a model's architecture
json_string = model.to_json()
yaml_string = model.to_yaml()
Saving/loading only a model's weights
model.save_weights('my_model_weights.h5')
Both, I guess, are much leaner than the entire working state. You can use these to recreate the model later on, as stated in the docs (load_weights, model_from_json, etc.).
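For completeness, here is a minimal sketch of that round trip, assuming standard Keras imports (x_test just stands in for your own prediction input):

from keras.models import model_from_json

# rebuild the architecture from the saved JSON, then load the weights into it
model = model_from_json(json_string)
model.load_weights('my_model_weights.h5')

predictions = model.predict(x_test)  # predict() works without compiling the model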
I am not aware of a way to save such a model using significantly less space.
However, if your problem is not the file size itself but the upload file size limit, you can try to split the file into several chunks that are smaller than that limit and upload those, e.g. using 7-Zip. See this link for a tutorial on how to split the file.
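If you would rather not depend on 7-Zip, a plain-Python sketch of the same idea (file names and chunk size are only examples) could look like this:

CHUNK = 90 * 1024 * 1024  # 90 MB per part, safely under GitHub's 100 MB limit

def split_file(path):
    # write path.part0, path.part1, ... each at most CHUNK bytes
    with open(path, 'rb') as src:
        part = 0
        while True:
            data = src.read(CHUNK)
            if not data:
                break
            with open('{}.part{}'.format(path, part), 'wb') as dst:
                dst.write(data)
            part += 1

def join_file(path, num_parts):
    # concatenate the parts back into the original file
    with open(path, 'wb') as dst:
        for part in range(num_parts):
            with open('{}.part{}'.format(path, part), 'rb') as src:
                dst.write(src.read())

split_file('my_model.h5')      # run before pushing
# join_file('my_model.h5', 3)  # run after cloning, if 3 parts were created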
Related
After fine-tuning my BERT model I want to save it so I can use it later (once the task is learned). I use these two lines to do it:
model_to_save.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)
The problem is that it creates a file of over 400 MB, which I can't upload to GitHub because it is too big.
Is there a way to save the model as a smaller file and load it back later?
Regards
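For what it's worth, here is a minimal sketch of loading those files back later, assuming the Hugging Face transformers API (BertForSequenceClassification is just an example class; use whichever model class you fine-tuned):

from transformers import BertForSequenceClassification, BertTokenizer

output_dir = "path/to/output_dir"  # the same directory passed to save_pretrained above

# reload the fine-tuned weights and the tokenizer for inference
model = BertForSequenceClassification.from_pretrained(output_dir)
tokenizer = BertTokenizer.from_pretrained(output_dir)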
I am learning object detection on Windows 10 with the TensorFlow Object Detection API.
I downloaded ssd_mobilenet_v1_coco_2018_01_28.tar.gz from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
After I unzipped the ssd_mobilenet_v1_coco_2018_01_28.tar.gz file, I didn't find the pbtxt file.
Where can I find the related pbtxt file of ssd_mobilenet_v1_coco?
I know that there are some pbtxt files in the models-master\research\object_detection\data folder, but which file corresponds to ssd_mobilenet_v1_coco?
The label map is not specific to an architecture, but rather to a dataset (it defines which classes you support, and accordingly how many channels the classification layer should have).
Therefore, you simply need the label map that corresponds to COCO, which is object_detection/data/mscoco_label_map.pbtxt.
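As a quick check, a sketch of loading that label map with the utilities shipped in the Object Detection API (the relative path assumes you run from the models/research directory):

from object_detection.utils import label_map_util

label_map = label_map_util.load_labelmap('object_detection/data/mscoco_label_map.pbtxt')
categories = label_map_util.convert_label_map_to_categories(
    label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)

print(category_index[1])  # should show something like {'id': 1, 'name': 'person'}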
The dataset is baked into the model, and the model is delivered as a .tar.gz or .zip. If you use a pretrained base model, the label map can be found in the GitHub code tree at object_detection/data/mscoco_label_map.pbtxt, as netanel-sam explains.
But if you fine-tune that pretrained model and add new items to be detected to your dataset, then when you deliver your modified model you must also provide your label map, and there is no better way than including it in the .tar.gz or .zip. The same applies to a Lite model, because conversion from a trainable model to a Lite model often loses items from the dataset, and a Lite model also uses a different label-map format than the base model. Confusing?
I have Python code that generates a deep convolutional neural network using Keras. I'm trying to save the model, but the result is gigantic (100s of MBs). I'd like to pare that down a bit to make something more manageable.
The problem is that model.save() stores (quoting the Keras FAQ):
the architecture of the model, allowing to re-create the model
the weights of the model
the training configuration (loss, optimizer)
the state of the optimizer, allowing to resume training exactly where you left off.
If I'm not doing any more training, I think I just need the first two.
I can use model.to_json() to make a JSON string of the architecture and save that off, and model.save_weights() to make a separate file containing the weights. That's about a third the size of the full model.save() result. But I'm wondering if there's some way to store these in a single self-contained file? (Short of outputting two files, zipping them together, and deleting the originals.) Alternatively, maybe there's a way to delete the training configuration and optimizer state when training is complete, so that model.save() doesn't give me something nearly so big?
Thanks.
The save function of a Model has a parameter exactly for this, called include_optimizer; setting it to False will save the model without the optimizer state, which should lead to a much smaller HDF5 file:
model.save("something.hdf5", include_optimizer=False)
Does anyone know which function I should use if I want to use the pre-trained doc2vec models from this website https://github.com/jhlau/doc2vec?
I know we can use KeyedVectors.load_word2vec_format() to load the word vectors from pre-trained word2vec models, but is there a similar function in gensim to load pre-trained doc2vec models as well?
Thanks a lot.
When a model like Doc2Vec is saved with gensim's native save(), it can be reloaded with the native load() method:
model = Doc2Vec.load(filename)
Note that large internal arrays may have been saved alongside the main filename, in other filenames with extra extensions – and all those files must be kept together to re-load a fully-functional model. (You still need to specify only the main save file, and the auxiliary files will be discovered at expected names alongside it in the same directory.)
You may have other issues trying to use those pre-trained models. In particular:
as noted on the linked page, the author used a custom variant of gensim that forked off about two years ago; the files might not load in standard gensim, or in later gensim versions
it's not completely clear what parameters were used to train those models (though I suppose if you succeed in loading them you could see them as properties in the model), and how much meta-optimization was used for which purposes, and whether those purposes will match your own project
if the parameters are as shown in one of the repo files, train_model.py, some are inconsistent with best practices (a min_count=1 is usually bad for Doc2Vec) or with the apparent model size (a mere 1.4 GB model couldn't hold 300-dimensional vectors for all of the millions of documents or word-tokens in 2015 Wikipedia)
I would highly recommend training your own model, on a corpus you understand, with recent code, and using metaparameters optimized for your own purposes.
Try this:
import gensim.models as g
model="model_folder/doc2vec.bin" #point to downloaded pre-trained doc2vec model
#load model
m = g.Doc2Vec.load(model)
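If the file does load, a quick sanity check might look like this (the token list is just an example document):

# infer a vector for a new, tokenized document from the loaded model
tokens = ["machine", "learning", "is", "fun"]
vector = m.infer_vector(tokens)
print(vector.shape)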
I am using Tensorflow and Inception v3 CNN.
I want to save the trained network using freeze_graph.py.
What I have done so far is save the model to input_graph.pb during training using tf.train.write_graph, and my understanding is that this should be the "input graph" in the FLAGS.
When it comes to the remaining fields, such as the input saver, output node names, and initializer nodes, I'm not sure how to create the proper file formats.
Another question is how to use the .meta file created during training.
I would really appreciate if you could take some time to help me.
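In case it helps later readers, here is a rough sketch of how the pieces mentioned above typically map onto freeze_graph in TF 1.x. The checkpoint prefix and output node name ("softmax") are placeholders that you would replace with the ones from your own training run, and input_saver and initializer_nodes can usually be left empty:

from tensorflow.python.tools import freeze_graph

freeze_graph.freeze_graph(
    input_graph="input_graph.pb",         # graph written with tf.train.write_graph
    input_saver="",                       # usually empty unless you exported a SaverDef file
    input_binary=False,                   # False if the graph was written as text (as_text=True)
    input_checkpoint="model.ckpt-10000",  # placeholder: your checkpoint prefix (covers the .meta/.index/.data files)
    output_node_names="softmax",          # placeholder: the name of your output op
    restore_op_name="save/restore_all",
    filename_tensor_name="save/Const:0",
    output_graph="frozen_graph.pb",       # resulting self-contained graph with the weights baked in
    clear_devices=True,
    initializer_nodes="")                 # usually empty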