I am trying to implement Ray Tune hyperparameter optimization for an LSTM model in PyTorch, but I am really struggling to get to grips with it. I googled my question but didn't find any useful articles. The LSTM I am trying to tune is for time series forecasting, and everything I could find on the internet was for image classification, as in their PyTorch example.
Any useful resources that might help me will be much appreciated. Thanks!
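In case it helps, here is a minimal sketch of how Ray Tune is typically wired around a PyTorch training loop. The LSTM, the dummy data, and the search space below are placeholders rather than anything from your project, and depending on your Ray version the metric-reporting call may be session.report instead of tune.report:

import torch
import torch.nn as nn
from ray import tune

class LSTMForecaster(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        out, _ = self.lstm(x)           # out: (batch, seq_len, hidden_size)
        return self.head(out[:, -1])    # one-step-ahead prediction from the last hidden state

def train_lstm(config):
    # Dummy series data just to make the sketch self-contained; replace with your dataset.
    x = torch.randn(64, 20, 1)
    y = torch.randn(64, 1)
    model = LSTMForecaster(config["hidden_size"])
    optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
    loss_fn = nn.MSELoss()
    for _ in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        tune.report(loss=loss.item())   # Ray Tune uses this metric to compare trials

tune.run(
    train_lstm,
    config={
        "hidden_size": tune.choice([16, 32, 64]),
        "lr": tune.loguniform(1e-4, 1e-1),
    },
    num_samples=8,
)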
I am new to machine learning, but I have decent experience in Python. I am faced with a problem: I need to find a machine learning model that would work well to predict the speed of a boat given current environmental and physical conditions. I have looked into scikit-learn, PyTorch, and TensorFlow, but I am having trouble finding information on what type of model I should use. I am almost certain that linear regression models would be useless for this task. I have been told that non-parametric regression models would be ideal for this, but I am unable to find many in the scikit-learn library. Should I be trying to use regression models at all, or should I be looking more into neural networks? I'm open to any suggestions, thanks in advance.
I think a multiple linear regression model would work well for your case. I am assuming that the input data is just a set of environmental parameters and that you have a boat speed corresponding to each observation. For such problems, regression usually works well. I would not recommend neural networks unless you have a lot of training data and each input sample is also quite large.
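For what it's worth, here is a minimal scikit-learn sketch of that setup. The features and the data are made up purely for illustration; the point is the fit/predict workflow:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# X: one row per observation of environmental/physical conditions (e.g. wind speed,
# wave height, engine load); y: the measured boat speed. Random data used here.
X = np.random.rand(500, 3)
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + np.random.normal(0, 0.1, 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))

If a plain linear fit turns out to be too restrictive, scikit-learn also ships non-parametric regressors (for example KNeighborsRegressor or RandomForestRegressor) with the same fit/predict interface, so swapping them in is cheap to try.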
I am a beginner in JAX and I am trying to learn how to train a neural network. I saw some blogs, but as I understand it there isn't a library that lets you train a model easily, like 'fit' in sklearn. I am interested in classification tasks; could you please recommend any blogs whose approach I could adapt to my problem?
JAX is an array manipulation library, not a deep learning library: in that respect, you should think of it as more similar to NumPy than to scikit-learn. If you want neural networks built on JAX, there are several other good projects available, such as Haiku and Flax.
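To make that concrete, here is a minimal sketch of a classifier defined with Flax on top of JAX; the architecture and sizes are arbitrary, just to show how model definition and the forward pass look:

import jax
import jax.numpy as jnp
import flax.linen as nn

class MLPClassifier(nn.Module):
    num_classes: int = 2

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(32)(x)
        x = nn.relu(x)
        return nn.Dense(self.num_classes)(x)   # logits, one per class

model = MLPClassifier()
rng = jax.random.PRNGKey(0)
params = model.init(rng, jnp.ones((1, 4)))      # initialise weights for 4 input features
logits = model.apply(params, jnp.ones((3, 4)))  # forward pass on a batch of 3 examples

Note that there is still no 'fit' method: you write the training loop yourself, typically computing gradients with jax.value_and_grad and applying them with an optimizer from Optax.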
I started experimenting with GANs, and there are a lot of options on the internet. The thing I worry about now is which one I should use: Keras fit or Keras train_on_batch? Which one is the proper way to train the model? Thanks.
According to TensorFlow's documentation, the combination of tf.data and model.train_on_batch() is the fastest way to train a model.
You can check more about tf.data in the link below:
https://www.tensorflow.org/guide/data
I have already implemented GANs with Keras. My implementations might help you get an idea of how to train these models:
https://github.com/kochlisGit/Keras-GAN
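As a minimal sketch of that tf.data + train_on_batch combination (a plain Keras model with random data rather than a GAN, just to show the manual loop):

import numpy as np
import tensorflow as tf

# Toy data: 1000 samples with 10 features and a binary label.
x = np.random.rand(1000, 10).astype("float32")
y = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(1000).batch(32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

for epoch in range(3):
    for xb, yb in dataset:
        loss = model.train_on_batch(xb, yb)   # one gradient step per batch
    print(f"epoch {epoch}: last batch loss {loss:.4f}")

For GANs specifically, train_on_batch is the usual choice anyway, because you typically alternate discriminator and generator updates by hand inside the loop rather than relying on a single fit call.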
I am following this tutorial:
https://www.tensorflow.org/tutorials/keras/text_classification_with_hub
It only goes up to fitting the model, but I couldn't find how to use the model on a new dataset to classify unlabeled data. I tried following other tutorials, but I couldn't get them to work since they might not be text-based.
model.add(tf.keras.layers.Dense(1))
I run into an issue when I try to set the layer size to 2 (for positive and negative), but that doesn't work either.
I think you misunderstood the purpose of that tutorial. That tutorial applies what is known as "transfer learning": you take an already trained model and train it further on other data. What you are doing is creating an entirely new model, which is not the purpose of that specific tutorial. Furthermore, that model needs a labeled dataset, which the tutorial provides via the TensorFlow Datasets library. To accomplish what you are trying to do, you should look at a different tutorial explaining how to train an LSTM model for text classification from scratch.
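For reference, the transfer-learning model in that tutorial is roughly the following sketch (the exact TF Hub embedding URL is an assumption here, check the tutorial). Once it has been fitted on the labeled IMDB data, classifying new unlabeled text is just a model.predict call: the single-unit Dense layer outputs one logit, so there is no need to change it to 2 units for positive/negative.

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Pre-trained text embedding from TF Hub (assumed to be the nnlm-en-dim50 module used in the tutorial).
hub_layer = hub.KerasLayer("https://tfhub.dev/google/nnlm-en-dim50/2",
                           input_shape=[], dtype=tf.string, trainable=True)

model = tf.keras.Sequential([
    hub_layer,
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),   # one logit: > 0 leans positive, < 0 leans negative
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=["accuracy"])
# ... model.fit(...) on the labeled IMDB data, as in the tutorial ...

new_texts = np.array(["What a great movie!", "Utterly boring and far too long."])
probs = tf.nn.sigmoid(model.predict(new_texts))   # probability of the positive class
labels = ["positive" if p > 0.5 else "negative" for p in probs.numpy().ravel()]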
I have a model which is trained on a large training corpus. I also have a feedback loop which provides me with feedback from users. The model is built on top of Theano and Python.
How can I add this feedback into my model? Right now I am thinking about two approaches:
Add the mini-batch to the training corpus and train again. This is straightforward, but it will take a lot of time to train.
Use the saved state of the trained model and train only on the mini-batch. This looks promising, but right now I am stuck on how to do it with Theano.
Can someone help me with the second case?
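For the second approach, the usual pattern with Theano is to persist the values of the model's shared variables and later restore them before running the compiled training function on the new mini-batch only. A rough sketch, where params (the list of shared variables), train_fn (the compiled update step) and feedback_minibatches are placeholders for whatever your model actually defines:

import pickle

# Assumptions: `params` is the list of Theano shared variables of the trained model,
# and `train_fn` is the already-compiled theano.function performing one update step.

# 1. Save the trained state once, right after the initial training run.
with open("model_params.pkl", "wb") as f:
    pickle.dump([p.get_value() for p in params], f)

# 2. Later, restore that state instead of re-initialising the weights ...
with open("model_params.pkl", "rb") as f:
    for p, saved in zip(params, pickle.load(f)):
        p.set_value(saved)

# 3. ... and run a few extra updates on the feedback mini-batch only.
for x_batch, y_batch in feedback_minibatches:   # hypothetical iterable of user-feedback data
    loss = train_fn(x_batch, y_batch)

Using a smaller learning rate for these incremental updates is usually a good idea, so the feedback batches do not wash out what was learned on the full corpus.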