Is it possible to use "reinforcement learning" or a feedback loop on a supervised model?
I have worked on a machine learning problem using a supervised learning model, more precisely a linear regression model, but I would like to improve the results by creating a feedback loop on the outputs of the prediction, i.e., telling the algorithm whether it made mistakes on some examples.
As far as I know, this is basically how reinforcement learning works: the model learns from positive and negative feedback.
I found out that we can implement both supervised learning and reinforcement learning algorithms using PyBrain, but I couldn't find a way to combine the two.
Most (or maybe all) iterative supervised learning methods already use a feedback loop on the outputs of the prediction. In fact, this feedback is very informative, since it provides the exact amount of error for each sample. Think, for example, of stochastic gradient descent, where you compute the error of each sample to update the model parameters.
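A minimal sketch of that idea in Python (the synthetic data, learning rate, and epoch count here are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 samples, 3 features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)

w = np.zeros(3)                         # model parameters
lr = 0.01                               # learning rate

for epoch in range(50):
    for xi, yi in zip(X, y):
        error = xi @ w - yi             # exact error on this sample: the feedback signal
        w -= lr * error * xi            # gradient step driven by that feedback

print(w)                                # should end up close to true_w

Each update is driven directly by the signed error on a single sample, which is exactly the kind of "tell the algorithm it made a mistake" loop the question describes.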
In reinforcement learning the feedback signal (i.e., the reward) is much more limited than in supervised learning. Therefore, in the typical setup of adjusting some model parameters, if you have a set of input-output pairs (i.e., a training data set), it probably makes no sense to apply reinforcement learning.
If you are thinking of a more specific case/problem, you should be more specific in your question.
Reinforcement learning has been used to tune hyperparameters and/or select optimal supervised learning models. There's also a paper on it: "Learning to Optimize with Reinforcement Learning".
After reading Pablo's answer, you may want to read up on "backpropagation". It may be what you are looking for.
Let's assume we're dealing with continuous features and responses. We fit a linear regression model (let's say first order) and after cross-validation we get a reasonably good r^2 (let's say r^2 = 0.8).
Why do we go for other ML algorithms? I've read some research papers where the authors tried different ML algorithms, taking the simple linear model as a base model for comparison. In these papers, the linear model outperformed the other algorithms. What I have difficulty understanding is why we go for other ML algorithms then. Why can't we just be satisfied with the linear model, especially in the specific case where the other algorithms perform poorly?
The other question is: what do they gain from presenting the other algorithms in their research papers if those algorithms performed poorly?
The best models for solving predictive problems with a continuous output are regression models, especially if you use a neural network (polynomial or linear) with hyperparameter tuning suited to the problem.
Other ML algorithms such as decision trees or SVMs are mainly designed for classification; on paper they can do regression too, but in practice they struggle to predict genuinely new values (a tree-based regressor, for instance, cannot extrapolate beyond the range of the training targets; see the sketch below).
Still, in the field of research people always try to find better ways to predict values than plain regression, just as in the classification world we started with logistic regression, moved to decision trees, and now have SVMs, ensemble models, and deep learning.
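A quick illustration of the extrapolation point on synthetic data (the numbers are arbitrary): a tree regressor predicts a roughly constant value outside the training range, while the linear model follows the trend:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 10, size=(200, 1))
y_train = 3.0 * X_train.ravel() + rng.normal(scale=0.5, size=200)

lin = LinearRegression().fit(X_train, y_train)
tree = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)

X_new = np.array([[20.0]])              # far outside the training range
print(lin.predict(X_new))               # ~60: extrapolates the linear trend
print(tree.predict(X_new))              # ~30: capped near the largest training target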
I think the answer is because you never know.
"especially in the specific case where other algorithms perform poorly?"
You know they performed poorly because someone tried those models. It's always worth trying various models.
I am new to machine learning, but I have decent experience in Python. I am faced with a problem: I need to find a machine learning model that would work well to predict the speed of a boat given current environmental and physical conditions. I have looked into scikit-learn, PyTorch, and TensorFlow, but I am having trouble finding information on what type of model I should use. I am almost certain that linear regression models would be useless for this task. I have been told that non-parametric regression models would be ideal for this, but I am unable to find many in the scikit-learn library. Should I be trying to use regression models at all, or should I be looking more into neural networks? I'm open to any suggestions, thanks in advance.
I think a multiple linear regression model would work well for your case. I am assuming that the input is just a bunch of environmental parameters, each with a corresponding boat speed. For such problems, regression usually works well. I would not recommend neural networks unless you have a lot of training data and each input sample is also quite large.
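A minimal sketch of what that could look like with scikit-learn (the feature names and the random stand-in data are hypothetical; swap in your real measurements):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# hypothetical features: wind_speed, wave_height, current_speed
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500)  # stand-in boat speed

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print(r2_score(y_test, model.predict(X_test)))   # held-out r^2 as a quick sanity check

If a linear fit turns out to be too restrictive, swapping LinearRegression for sklearn.ensemble.RandomForestRegressor is a one-line change.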
I am new to reinforcement learning, and currently I am working on a small Q-learning project, but I am a little confused.
1- What is the testing phase of a Q-learning model, and how do we make a prediction (try it on a single, unseen example) with it? At this point I have created the needed functions for choosing an action, getting the reward, etc., and I was able to run 10000 episodes, but I believe this is the training phase.
2- What are the metrics we use to say that our model has learned and performed well or not? Something like accuracy in a classification scheme, for example.
Thank you.
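For what it's worth, the testing phase for tabular Q-learning usually means running the learned greedy policy with exploration and learning updates switched off, and reporting the average return over episodes. A minimal sketch, assuming a classic Gym-style environment and a Q-table stored as a 2-D NumPy array Q[state, action]:

import numpy as np

def evaluate(env, Q, n_episodes=100):
    returns = []
    for _ in range(n_episodes):
        state = env.reset()
        done, total = False, 0.0
        while not done:
            action = np.argmax(Q[state])          # greedy: no epsilon, no Q update
            state, reward, done, _ = env.step(action)
            total += reward
        returns.append(total)
    return np.mean(returns)                       # average return is the usual metric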
I ask this question because many deep learning frameworks, such as Caffe, support model refinement. For example, in Caffe we can use a snapshot to initialize the neural network parameters and then continue training, as the following command shows:
./caffe train -solver solver_file.prototxt -snapshot snap_file.solverstate
In order to further train the model, I can play with the following tricks:
use smaller learning rate
change the optimisation method; for example, switch from stochastic gradient descent to the Adam algorithm
Any other tricks I can play with?
PS: I understand that reducing the loss function value on the training samples does not mean that we get a better model.
The question is way too broad, I think. However, this is a common practice, especially in the case of a small training set. I would rank the possible methods like this:
smaller learning rate
more/different data augmentation
add noise to the training set (related to data augmentation, indeed)
fine-tune on a subset of the training set.
The very last one is indeed a very powerful method to finalize a model that performs poorly on some corner cases. You can then build a 'difficult' training subset in order to bias the model towards it. I personally use it very often.
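A minimal PyTorch sketch of that last trick: resume from a snapshot and fine-tune on the 'difficult' subset with a much smaller learning rate. The tiny model and random data here are stand-ins for your own network and corner cases:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
# model.load_state_dict(torch.load("checkpoint.pt"))   # resume from your snapshot

# stand-in for the subset of hard examples the model gets wrong
hard_X, hard_y = torch.randn(64, 10), torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(hard_X, hard_y), batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # much smaller LR than initial training
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                                     # a few epochs usually suffice
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()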
Are there any deep learning papers/references where clustering was performed on structured data?
I know it can be done using k-means, GMMs, etc. But is there any chance that cluster analysis can be done using deep neural nets and the like? Thanks.
Neural networks can be used in a clustering pipeline. For example, you can use self-organizing maps (SOMs) for dimensionality reduction and k-means for clustering. Autoencoders also directly come to mind. But then, again, that is compression / dimensionality reduction rather than clustering; the real clustering is done by something else.
The problem with clustering is the missing optimization goal: the problem is not well-defined.
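A minimal sketch of that pipeline in Python: compress the data with a small PyTorch autoencoder, then run k-means on the learned codes (architecture sizes, epochs, and cluster count are illustrative, not tuned):

import torch
import torch.nn as nn
from sklearn.cluster import KMeans

X = torch.randn(1000, 50)               # stand-in for your (n_samples, n_features) data

encoder = nn.Sequential(nn.Linear(50, 16), nn.ReLU(), nn.Linear(16, 2))
decoder = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 50))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

for epoch in range(100):                 # train to reconstruct the input
    opt.zero_grad()
    loss = ((decoder(encoder(X)) - X) ** 2).mean()
    loss.backward()
    opt.step()

codes = encoder(X).detach().numpy()      # compressed representation
labels = KMeans(n_clusters=5, n_init=10).fit_predict(codes)   # the actual clustering step

Note that, exactly as said above, the network only does the compression; the clustering itself still comes from k-means.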
Deep learning refers to the depth of the neural nets and the huge number of parameters used to learn how to recognize features related to a certain object. Neural nets in essence need a loss function to learn, and that loss should be an equation whose gradients tell us how much each parameter must be corrected to get a better result (basically, forward propagation to predict and backward propagation to update the parameters). For clustering, such a loss function does not exist as of now, so we don't use neural nets for clustering. And if there are no neural nets, there is no deep learning.
If any part of that seems confusing, comment below.
To read more about clustering algorithms, have a look at this: https://towardsdatascience.com/the-5-clustering-algorithms-data-scientists-need-to-know-a36d136ef68
Again you will find no neural nets. :)
Yes.
If you do a little bit of literature research yourself you will find that people have repeatedly published clustering with deep neural networks.
Except that it doesn't seem to work anywhere but on MNIST data...
Two other potential methods:
KMeans + autoencoder (a simple deep learning architecture: reduce the dimensionality of the data using an autoencoder, then cluster the codes with k-means)
Deep Embedded Clustering algorithm (more advanced deep learning)