I am trying to fit a deep learning model for a segmentation task (TensorFlow), but I keep running into the error below.
Code (Colab notebook):
https://colab.research.google.com/drive/1MyACfSmaaZ0Gnfy3NgOm2kltvdMheWZP?usp=sharing
The error occurs when I run the `history = model.fit(...)` line:

[Image of the error](https://i.stack.imgur.com/y1L8Z.png)

I don't understand why this error occurs or how to resolve it.
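Without the full traceback it is hard to say, but fit-time errors in segmentation models are very often a shape mismatch between the model's output and the mask labels. Below is a minimal sanity check to run before `model.fit`; the batch shapes and `num_classes` are placeholder assumptions, not values taken from the notebook:

```python
import numpy as np

# Placeholder batch shapes -- substitute the arrays your generator yields.
images = np.zeros((8, 128, 128, 3), dtype=np.float32)  # input batch
masks = np.zeros((8, 128, 128, 1), dtype=np.float32)   # segmentation labels

# With sparse_categorical_crossentropy the masks should be (B, H, W) or
# (B, H, W, 1); with categorical_crossentropy they must be one-hot,
# i.e. (B, H, W, num_classes). Verify this before calling model.fit.
num_classes = 1
expected = images.shape[:3] + (num_classes,)
assert masks.shape == expected, f"mask shape {masks.shape} != {expected}"
print("mask/label shapes consistent:", masks.shape)
```

If the shapes disagree, the fix is usually to reshape or one-hot encode the masks rather than to change the model.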
I am working on an object detector using the TensorFlow Object Detection API. I downloaded a model from the model zoo repository, specifically ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8, to train on my custom dataset, which contains 2,500 images split into a 70:30 training/testing ratio. Following a tutorial, I edited the pipeline.config file accordingly, adding the label_map paths and the other required settings. I trained this model for 50,000 steps and monitored the training process in TensorBoard, which showed good training progress. The issue occurred when I went to evaluate the model. For some reason, the evaluation results are the following:
What could the issue be, given that the loss graphs, and even the learning-rate graphs, look good? Thanks.
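For reference, near-zero COCO metrics after a healthy-looking training run often trace back to the eval input reader in pipeline.config pointing at the wrong files. The fragment below is a hedged sketch of the relevant section; the paths are placeholders, and the key point is that `label_map_path` must be identical to the one used by `train_input_reader`:

```
eval_config {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
}

eval_input_reader {
  # Must be the same label map used for training.
  label_map_path: "annotations/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "annotations/test.record"
  }
}
```

It is also worth confirming that evaluation was launched with `--checkpoint_dir` pointing at the directory containing the trained checkpoints; without it, the API runs training instead of evaluation.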
I am building a model for semantic segmentation using Kaggle and Keras. No matter how complex the model is, merely building it takes 15.1 GB of GPU memory, so I have no room left to load any images, even when using a generator; I always get an OOM error. Am I doing something wrong? Why does Keras take 15 GB just to create the model?
I have not yet compiled the model; this happens right after building it with `model = Model(img_input, o)`.
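This is very likely not the model's actual size: by default TensorFlow reserves nearly all free GPU memory as soon as the first CUDA context is created, so roughly the whole card shows up as "used" regardless of model complexity. A sketch of the standard workaround, assuming TensorFlow 2.x, which must run before the GPU is first touched:

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of grabbing
# (almost) the whole card up front at context creation.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

print("memory growth enabled for", len(gpus), "GPU(s)")
```

With memory growth enabled, the reported usage after building the model reflects what the model actually needs, and genuine OOMs can then be addressed by reducing batch size or input resolution.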
I am trying to use the ssd_inception_v2_coco pre-trained model from the TensorFlow Object Detection API, training it on a single-class dataset with transfer learning. I trained the network for around 20k steps (total loss around 1), and using the checkpoint data I created inference_graph.pb and used it in the detection code.
To my surprise, when I tested the network on the training data itself, the graph could not detect even 1 of the 11 cases (0/11). I am lost trying to find the issue. What might the mistake be?
P.S.: I am not able to run train.py and eval.py at the same time due to memory issues, so I don't have precision info from TensorBoard.
Has anyone faced a similar issue?
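One frequent cause of zero detections with the Object Detection API, even when training loss looks reasonable, is a label map whose ids do not match the ids encoded in the TFRecords, or that do not start at 1 (id 0 is reserved for background). A sketch of a single-class label_map.pbtxt; the class name is a placeholder:

```
item {
  id: 1
  name: 'my_object'
}
```

It is also worth temporarily lowering the score threshold in the detection/visualization code (for example to 0.1) to check whether the detector fires at all with low confidence before concluding it detects nothing.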
I am working on a license plate detector. I came across dlib, used its Python wrapper to train a model, and stored the trained model in a trainedModel.svm file so I can use it for testing without retraining.

Now I want to use this trainedModel.svm in C++ (with the dlib library).

The problem I am facing is accuracy: the trained model gives me better results in Python than in C++. Why? I don't understand this, because I am using the same model file for testing, so why the different results?

Can anyone explain the reason behind this accuracy difference, and how I can make the C++ accuracy match the Python accuracy?