I am new to deep learning model training and prediction, and I have a conceptual question about how to make predictions after training. I am following this tutorial on GitHub: https://github.com/jpark621/language-style-transfer. I have successfully completed training on my datasets, and a model was created in the models/ directory. The model contains the trained weights, along with the losses of the two autoencoders and the adversarial loss.
My question is: now that I have these .meta, .index, and .data files, how do I make predictions with the model by providing text as input? The output would also be text (with the writing style modified by the autoencoder model).
Any tips/resources/pointers would be helpful! Thanks so much.
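In case a sketch helps: those .meta, .index, and .data files are a TensorFlow checkpoint, and a checkpoint is usually restored for inference along the lines below. The checkpoint prefix and the tensor names are placeholders, since the actual names depend on how that repo builds its graph.

# Minimal sketch (TensorFlow 1.x-style checkpoints): rebuild the graph from the
# .meta file, restore the weights from .index/.data, and run the output tensor.
# 'models/model.ckpt.meta' and the tensor names are hypothetical placeholders.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('models/model.ckpt.meta')   # rebuilds the graph
    saver.restore(sess, tf.train.latest_checkpoint('models/'))     # loads the weights
    graph = tf.get_default_graph()

    # Inspect graph.get_operations() to find the real input/output tensor names.
    inputs = graph.get_tensor_by_name('inputs:0')       # hypothetical input placeholder name
    outputs = graph.get_tensor_by_name('outputs:0')     # hypothetical output tensor name

    encoded_text = ...  # the repo's own preprocessing (vocabulary lookup, padding) goes here
    styled = sess.run(outputs, feed_dict={inputs: encoded_text})
    # `styled` then has to be decoded back into text with the repo's vocabulary.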
Related
I was wondering if it is useful to train a pre-trained ResNet (pre-trained on ImageNet) with images that are closer to my classification problem. I want to use 50,000 labeled images of trees from a paper to update the weights of the pre-trained ResNet. Then I would like to use these weights to re-train and evaluate the ResNet, hopefully better fitted this way, on my own set of tree images.
I have already used the pre-trained ResNet on my own images with moderate success. Due to the small dataset size (~5,000 images), I thought it might be smart to further train the pre-trained ResNet with more similar data.
Any suggestions or experiences you want to share?
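For what it's worth, here is a minimal sketch of that two-stage fine-tuning idea in Keras; the dataset objects, class counts, and the layer-freezing choice are placeholders and assumptions, not values from the actual project.

# Sketch of two-stage fine-tuning: first adapt an ImageNet ResNet50 to the large
# intermediate tree dataset, then fine-tune the adapted backbone on the target set.
# trees_50k / my_trees_5k would be tf.data.Dataset objects yielding (image, label)
# batches; the fit calls are commented out because they are placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models

num_intermediate_classes = 20   # placeholder: classes in the 50k-image tree dataset
num_target_classes = 5          # placeholder: classes in your own ~5,000 images

base = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,
                                      input_shape=(224, 224, 3), pooling='avg')

# Stage 1: adapt the ImageNet backbone using the 50,000 labeled tree images.
stage1_head = layers.Dense(num_intermediate_classes, activation='softmax')(base.output)
stage1 = models.Model(base.input, stage1_head)
stage1.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
               loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# stage1.fit(trees_50k, epochs=10)

# Stage 2: reuse the adapted backbone, attach a fresh head for your own classes,
# and fine-tune on the small dataset while keeping the earliest layers frozen.
for layer in base.layers[:-30]:
    layer.trainable = False
stage2_head = layers.Dense(num_target_classes, activation='softmax')(base.output)
stage2 = models.Model(base.input, stage2_head)
stage2.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
               loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# stage2.fit(my_trees_5k, epochs=10)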
I have a project with Fashion MNIST that predicts clothes from uploaded images, and I want to make some improvements to it. Is it possible to modify my project so that it trains automatically after each upload and prediction?
You can train your model manually by using the transfer learning technique (transfer learning is a method of reusing an already trained model for another task):
1. Instantiate a base model and load pre-trained weights into it.
2. Freeze all layers in the base model by setting trainable = False.
3. Create a new model on top of the output of one (or several) layers from the base model.
4. Train your new model on your new dataset.
Please refer to this gist for a working code example; a minimal sketch of these steps is also shown below. Thank you.
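A minimal sketch of those four steps in Keras; the base model choice, input shape, and class count are assumptions, not taken from the original project.

# Transfer learning sketch: frozen pre-trained backbone + new trainable head.
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 10  # e.g. Fashion MNIST has 10 classes

# 1. Instantiate a base model with pre-trained weights.
base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                         include_top=False, weights='imagenet')

# 2. Freeze all layers in the base model.
base.trainable = False

# 3. Create a new model on top of the base model's output.
inputs = tf.keras.Input(shape=(96, 96, 3))
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(num_classes, activation='softmax')(x)
model = models.Model(inputs, outputs)

# 4. Train the new model on the new dataset (x_new, y_new are placeholders).
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(x_new, y_new, epochs=5)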
I have 600,000 images and I want to classify them using Keras. I am just trying a pre-trained model on greyscale images, using the architecture of pre-trained models like ResNet50, InceptionV3, etc. But the accuracy and validation accuracy of the model have not changed; they are stuck at 67%. I tried changing the network, training for more epochs, and changing the pre-trained model, but I always get the same result of about 67% accuracy and validation accuracy. I don't understand why I keep getting the same result. Please recommend some ideas on how I can solve this problem. This is my code. Here steps_per_epoch = number of images / batch size, and the batch size is 128. The number of images in the training dataset is 479,369 and in the validation dataset is 136,962. This is the output of the code.
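As a quick sanity check of the steps_per_epoch figure mentioned above, using the counts given in the question (the variable names are mine):

import math
batch_size = 128
steps_per_epoch = math.ceil(479369 / batch_size)    # 3746 training steps per epoch
validation_steps = math.ceil(136962 / batch_size)   # 1071 validation steps
print(steps_per_epoch, validation_steps)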
I think you are using a pre-trained model, and that is why it keeps showing the same accuracy. My suggestion is to change the pre-trained model, or try your own custom model, and then see whether the results change.
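If it helps, here is a minimal sketch of the kind of small custom CNN that suggestion points to; the input shape and class count are assumptions, not values from the original code. Note that ImageNet-pretrained backbones expect 3-channel inputs, so greyscale images normally have to be converted to RGB before being fed to them, whereas this sketch takes single-channel input directly.

# Small custom CNN sketch for greyscale images (shape and class count are placeholders).
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 10  # placeholder: set to the actual number of classes

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),            # single channel for greyscale input
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dense(num_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])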
I am working on a project involving neural machine translation (translating English to French).
I have worked through some examples online and have now finished the model. Once a model is trained using Keras, how do I then get a translation prediction without training the entire model again? With the large dataset I am using, each epoch takes a long time, and of course I can't train the model every time I want a translation.
So what is the correct way of then generating predictions on new inputs without training the whole model again?
Thanks
You need to save the model and its weights when the fit ends, using:
model.save(model_name)
At any time, you can load your trained model using
model = keras.models.load_model(model_name)
and then perform predictions as
y_pred = model.predict(x_test)
Hope this will be helpful.
You can use the .predict() function: you pass new inputs into it, and it gives you a prediction. The docs for this function are here: keras
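Putting the two answers together, a minimal end-to-end sketch; the file name and the input array are placeholders, not names from the original project.

# Save once after training, then reload in a separate session to predict.
import numpy as np
from tensorflow import keras

# Right after training finishes (model is the trained translation model):
# model.save('translator.h5')

# Later, without retraining:
model = keras.models.load_model('translator.h5')
x_new = np.zeros((1, 20))       # placeholder: one encoded English sentence of length 20
y_pred = model.predict(x_new)   # the prediction still has to be decoded back into French text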
I have already trained a deep learning model with some data and it performs well with the test data. Now, how do I retrain this model when I get new data?
You can save your model using
yourModel.save('fileName.hdf5')
After you get the new data, you can load your saved model and call fit on it:
model = keras.models.load_model('fileName.hdf5')
model.fit(new_data, new_labels)
The training will continue from the last saved weights, optimizer state, and loss.
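A minimal sketch of that flow with the current Keras API; the file name and the new-data arrays are placeholders.

# Save after the first round of training, then reload and keep fitting on new data.
import numpy as np
from tensorflow import keras

# After the first round of training:
# model.save('fileName.hdf5')

# When new data arrives:
model = keras.models.load_model('fileName.hdf5')    # restores weights and optimizer state
x_new = np.zeros((32, 28, 28))                      # placeholder: new samples
y_new = np.zeros((32,))                             # placeholder: new labels
model.fit(x_new, y_new, epochs=3)                   # training continues from the saved state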