I have a problem displaying the plot of the loss and validation accuracy.
I have tried the same code many times and it gave me good results with different models.
But now when I run it I face this problem, even though I have not changed anything. I restarted Colab and created a new notebook, but the problem is the same; the same code with the same model gave me a good plot before.
I hope to find a solution.
I'm currently having a hard time training a BERT-based architecture on the GoEmotions dataset for multiclass sentiment classification, using the PyTorch framework.
My main problem is that accuracy is very low (around 0.40 on average) on the train/val sets and does not increase across epochs, while the loss does not decrease.
Screenshot of training metrics
What I can't explain is how the models trained in the tutorials start from an accuracy of 85% at the very first epoch, whereas mine doesn't.
Amongst my attempts, I've tried the following with no success at all:
change the architecture by adding another Linear layer
drop one-hot-encoded labels in favor of serialized labels (1, 0, 2, 0, ...)
change the learning rate and other parameters (batch size, tokenizer max length, ...)
make sure the loss function is correct and that the gradients get cleared before each update, ... (see the reference sketch after this list)
make sure dataset is balanced
switch to a different dataframe, in case something I hadn't noticed was wrong in the data
...and much more.
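For reference, here is a minimal sketch of what those two checks (integer labels with CrossEntropyLoss, and clearing gradients before each update) usually look like in PyTorch. `model`, `train_loader` and the learning rate are placeholders of mine, not taken from the linked code:
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()                           # expects integer class labels (0, 1, 2, ...)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # a small LR is typical for BERT fine-tuning

model.train()
for input_ids, attention_mask, labels in train_loader:      # train_loader: placeholder DataLoader
    optimizer.zero_grad()                                    # clear gradients before each update
    logits = model(input_ids, attention_mask)                # shape: (batch, num_classes)
    loss = criterion(logits, labels)                         # labels shape: (batch,), dtype long
    loss.backward()
    optimizer.step()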
I would be VERY thankful to whoever can help me and/or give hints about this problem, which has been affecting my work for about a week.
I'm pasting here:
the link to the custom functions that I've developed and used (likely the mistake is here, sigh): https://github.com/D1996B/myprojects/blob/master/scripts/BERT%20classification/BertClassification.py
the link to the colaboratory notebook with outputs, to see the steps: https://colab.research.google.com/drive/1ZMFkhA2Qagln__dc0qeVhpN4P3YjaXc4
Please let me know if the links are not accessible.
Thanks in advance to whoever puts their head into this.
I accidentally ran a model where the training loss is logged as "Cos_BN Loss" and the validation loss is logged as "Gen Cos_BN Loss", which means I have two separate graphs for train and validation (for the same type of loss).
I want to see them together on the same plot. How?
NOTE: I know I could just rerun the model with the validation loss logged under the same name as the training loss, BUT it took around 3 days to run on 3 GPUs, and I really don't want to train it all over again.
Current state
Thank you
At the bottom there is a download link that lets you save the data as a CSV file. You may need to enable it first (top left in the following picture):
You can then import this, e.g. using pandas, and plot it to your liking using `matplotlib`.
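For example, a minimal sketch; the file names below are assumptions (use whatever names TensorBoard gave your two CSV exports), and the columns are the ones TensorBoard usually writes (Wall time, Step, Value):
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("run-train-tag-Cos_BN_Loss.csv")  # hypothetical file name; columns: Wall time, Step, Value
val = pd.read_csv("run-val-tag-Gen_Cos_BN_Loss.csv")  # hypothetical file name

plt.plot(train["Step"], train["Value"], label="training loss")
plt.plot(val["Step"], val["Value"], label="validation loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()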
I am new to Deep Learning. I finished training a model that took 8 hours to run, but I forgot to plot the accuracy graph before closing the jupyter notebook.
I need to plot the graph, and I did save the model to my hard-disk. But how do I plot the accuracy graph of a pre-trained model? I searched online for solutions and came up empty.
Any help would be appreciated! Thanks!
What framework did you use, and which version? For any problem you may face in the future, this information can play a key role in how we can help you.
Unfortunately, for PyTorch/TensorFlow the model you saved likely contains only the weights of the neurons, not the training history. Once the Jupyter notebook is closed, the memory is cleared (and with it, the data from your training history).
The only thing you can extract is the final loss/accuracy you had.
However, if you regularly saved checkpoints of the model, you can load them and manually compute the accuracy/loss that you need. Then you can use matplotlib to reconstruct the graph.
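A minimal sketch of that idea in PyTorch, assuming checkpoints named like checkpoint_epoch_1.pt exist; `build_model()` and `val_loader` are placeholders for your own model constructor and validation DataLoader:
import glob
import torch
import matplotlib.pyplot as plt

def accuracy(model, loader, device="cpu"):
    # fraction of correctly classified samples in the loader
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            preds = model(x.to(device)).argmax(dim=1)
            correct += (preds == y.to(device)).sum().item()
            total += y.size(0)
    return correct / total

accuracies = []
for path in sorted(glob.glob("checkpoint_epoch_*.pt")):  # hypothetical checkpoint files
    model = build_model()                                 # placeholder: construct your architecture
    model.load_state_dict(torch.load(path, map_location="cpu"))
    accuracies.append(accuracy(model, val_loader))        # val_loader: your validation DataLoader

plt.plot(range(1, len(accuracies) + 1), accuracies, marker="o")
plt.xlabel("epoch (checkpoint)")
plt.ylabel("validation accuracy")
plt.show()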
I understand this is probably not the answer you were looking for. However, if the hardware is yours, I would recommend restarting the training. Eight hours is not that long to train a deep learning model.
I'm kinda new to the whole machine learning/deep learning field and I have a few doubts.
I was doing this tutorial on the TensorFlow page, https://www.tensorflow.org/tutorials/images/classification, and everything went as expected, but the tutorial ends at the validation loss/accuracy.
I was wondering how I could upload a photo of my dog and feed it to the model to test it, but I couldn't find an answer to that.
I tried predicting a few images from the training dataset, but the classes it predicts, along with the predicted numbers, are weird, as shown here: https://imgur.com/a/45jubOh. My results are really similar to the tutorial's.
Can someone help me with that, i.e. uploading an image, running it through the model, and interpreting the classes along with the predicted numbers?
Thanks a lot!
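A minimal sketch of what that prediction step could look like, assuming the model and the 180x180 input size from that tutorial, a class_names list taken from the training dataset, and a placeholder image path; the model in the tutorial outputs raw logits, which is likely why the numbers look odd, so they are passed through softmax here (in older TensorFlow versions the image helpers live under tf.keras.preprocessing.image instead):
import numpy as np
import tensorflow as tf

img_height, img_width = 180, 180          # input size used in the tutorial
class_names = train_ds.class_names        # class list from the tutorial's training dataset

img = tf.keras.utils.load_img("my_image.jpg", target_size=(img_height, img_width))  # placeholder path
img_array = tf.keras.utils.img_to_array(img)
img_array = tf.expand_dims(img_array, 0)  # make a batch of one

predictions = model.predict(img_array)    # raw logits, one score per class
scores = tf.nn.softmax(predictions[0])    # convert logits to probabilities

print("Predicted class: {} ({:.1f}% confidence)".format(
    class_names[np.argmax(scores)], 100 * np.max(scores)))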
Hey all, I was wondering how I could connect 2 different line plots on the same graph together in matplotlib?
If that is not ideal, then I could combine the 2 dataframes, but then I would need a way to change the color of the line plot at a certain x point.
I want to indicate where the predicted sales are on the graph. Here is a picture of what my code and graph currently look like (red is actual, green is predicted).
Here is the link to my ipython notebook https://github.com/neil90/Learning_LinearReg/blob/master/Elantra%20Regression_Practice.ipynb
My original dataset had 50 observations (small, I know), which I split into training and test sets. I got an R2 of .72 on my test set. I then looked online to see if I could find the independent variables for the 12 months after the dataset, and lo and behold I was able to (although I am not sure of their accuracy). I then wanted to predict the sales with my model; hence the green line.
It is always possible to connect two points using a single plot command, both in MATLAB and in Python. For example, in MATLAB:
% assumes each point is stored as a column vector [x; y]
P1 = df1(:,end);  % last point of the first plot
P2 = df2(:,1);    % first point of the second plot
plot([P1(1,1) P2(1,1)], [P1(2,1) P2(2,1)])
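Roughly the same thing in matplotlib/pandas, which matches the dataframes in your notebook more closely; df_actual, df_predicted and the column names "x" and "sales" are placeholders of mine:
import matplotlib.pyplot as plt

plt.plot(df_actual["x"], df_actual["sales"], color="red", label="actual")
plt.plot(df_predicted["x"], df_predicted["sales"], color="green", label="predicted")

# draw a connecting segment from the last actual point to the first predicted point
plt.plot([df_actual["x"].iloc[-1], df_predicted["x"].iloc[0]],
         [df_actual["sales"].iloc[-1], df_predicted["sales"].iloc[0]],
         color="green")
plt.legend()
plt.show()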
But this is not good practice; as you said, the green graph is your prediction part. Why not include more inputs when calculating the predicted points?
So I assume that you have a set of data and that you divided it into two parts: one for training and the other for testing the learned model.
The ratio should be around 70/30 or 80/20, unless you have a validation part as well.
So after you train your model on the training part of the data, you should check the model's error (does it converge?). As far as I can see, your prediction part has a huge error: it cannot even reproduce a sample it has seen before (i.e. the last point on the red graph).
So first try to plot the prediction over the whole training data to see how accurate the learned model is. I am pretty sure you need to reduce the training error of your model, either by running more iterations or by changing the structure of the model.
Because you have not mentioned enough details, most of the ideas here are based on assumptions.