How to predict future candlesticks based on a trained model - Python

Let's assume we have trained a model using Keras with more than 90% accuracy.
We have used past data (open price, high, low, close, volume, etc.) with an 80-20 train/test split.
The problem is that we used data which already exists to predict data which also already exists.
How can we use this model to predict the future? For example, using a trained model to predict red or green candlesticks for the next 4 hours?
I know we can use model.save, then load the model and finally call model.predict(), but model.predict() needs some input data to make predictions. Can we use a timestamp as that input (since we obviously don't have the future OHLCV data)?

Your training data and your prediction input should be in the same format.
For instance, if you trained your model on the previous day's open, high, etc. to predict today's data, just input today's data to predict tomorrow's.
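A minimal sketch of that idea, assuming a hypothetical Keras model trained on sliding windows of the last lookback OHLCV rows to classify the next candle (the file name, window length, shapes and scaling are placeholders):

import numpy as np
from tensorflow import keras

lookback = 24                                         # assumed window length used during training
model = keras.models.load_model("candle_model.h5")    # hypothetical saved model

# The input must have exactly the same shape and scaling as the training
# windows, e.g. (lookback, 5) for open, high, low, close, volume.
recent_ohlcv = np.random.rand(lookback, 5)            # stand-in for the latest real rows

# Add a batch dimension and predict the probability of a green candle.
proba = model.predict(recent_ohlcv[np.newaxis, ...])
print("green" if proba[0, 0] > 0.5 else "red")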

Related

Python - How to use fitted ARIMA model on unseen data

I am using statsmodels.tsa.arima.model.ARIMA to fit an ARIMA model on a time series.
How can I use this model to make predictions on unseen data? It seems that the predict and forecast functions can only make predictions from the last seen data in the training set that the model was fitted to.
So, for instance, I want to use a static model to keep making predictions into the future. This is for real-time multi-step forecasting where re-fitting the model isn't an option.
E.g.,
Say we have a dataset of 10,000 readings split into train and test (70/30).
The last reading we train on is 7,000.
Is it possible to, say, use the trained model and pass in readings 6997 to 7000 to predict 7001 to 7004,
and then in the following iteration pass it 6998 to 7001 to predict 7002 to 7005, using the same model?
This type of prediction is common in ML workflows, but it is not apparent to me how to perform it with ARIMA.
The predict and forecast functions only ask for index parameters; there is no parameter for fresh data.
You can easily do this with the predict method, which was created for exactly this purpose. First train your ARIMA model on all of your data (without splits). When generating forecasts, use the predict method and set the start and end parameters, e.g. when you want to predict 7001 to 7004:
model.predict(start=7000, end=7004)
The predict method will use all the data available up to the start point (including that one) and then make a prediction. That way you do not have to train your model again and again with new data.
The start/end parameters also work with datetimes or strings (like "2021-06-30" to "2021-07-31").
https://www.statsmodels.org/dev/generated/statsmodels.tsa.arima.model.ARIMAResults.predict.html
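A minimal sketch of this approach, assuming a univariate series of length 10,000 (the series and the ARIMA order are placeholders):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

y = np.random.rand(10_000)                # stand-in for the real series

# Fit once on all available data (no train/test split).
result = ARIMA(y, order=(1, 0, 1)).fit()

# Returns the model's predictions for indices 7000 through 7004
# using the already fitted parameters, so no re-fitting is needed.
forecast = result.predict(start=7000, end=7004)
print(forecast)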

Tensorflow continuous training from input data

I'm new to TensorFlow and have a general question:
I have a certain amount of training data and want to do a time series prediction.
The interval of my training data is one minute, and I want to make predictions for the following minutes based on new input data which is provided via a REST API.
What I don't understand is this:
Let's say I train the model with all the data up to yesterday; this means I can predict the first values of today to a certain extent. But the new values of today have not been observed by the model that was built yesterday.
How would you solve this problem?
Thanks
Djboblo,
I assume that you need to predict a whole day of values on a per-minute basis.
In that case your options are:
recursive prediction, i.e. using predicted data as input for the next prediction (sketched below)
structuring the model so that it provides you with a prediction for the whole next day
If it is just a matter of predicting a single minute forward and your model is trained on a reasonably large amount of data, don't worry: just feed it the values up to the prediction minute. Periodically you may re-train the model with new data.
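A minimal sketch of the recursive option, assuming a hypothetical Keras model that maps the last lookback minutes to the next minute's value (names, shapes and the file name are placeholders):

import numpy as np
from tensorflow import keras

lookback, steps_ahead = 60, 15
model = keras.models.load_model("minute_model.h5")    # hypothetical saved model

window = np.random.rand(lookback, 1)                  # stand-in for the latest observed minutes
predictions = []

for _ in range(steps_ahead):
    next_value = model.predict(window[np.newaxis, ...])[0, 0]
    predictions.append(float(next_value))
    # Slide the window forward, feeding the prediction back in as the newest input.
    window = np.vstack([window[1:], [[next_value]]])

print(predictions)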
What I was looking for is this:
How to use a Keras RNN model to forecast for future dates or events?
to predict stateful events
and after a while use the .fit() method to update the network with new data.
See: https://machinelearningmastery.com/update-lstm-networks-training-time-series-forecasting/

How to do partial training on additional data for a pre-trained model?

In my case, I would like to tune/adjust the model parameter values weekly.
I have pre-trained the model with Keras on 100K data rows and saved it.
Then, as new data is collected (10K data rows), I need to tune the model parameters, but I don't want to retrain on the whole dataset (110K).
How can I just partially fit the new data to the model? Load the model -> model.fit(10K_data)?
Yes, that is correct: you will train only on the new dataset (10K) with model.fit(10K_data). I recommend reducing the learning rate for the retraining, as you only want to make a minor update to the parameters while keeping the earlier learning intact (or, rather, leveraging the earlier learning).
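A minimal sketch of that workflow, assuming a saved Keras classifier and a new batch of roughly 10K rows (the file name, loss, shapes and learning rate are placeholders):

import numpy as np
from tensorflow import keras

model = keras.models.load_model("pretrained_model.h5")   # hypothetical saved model

# Recompile with a reduced learning rate so the update stays minor and the
# earlier learning is largely preserved.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy")

X_new = np.random.rand(10_000, 20)          # stand-in for the new feature rows
y_new = np.random.randint(0, 2, 10_000)     # stand-in for the new labels

model.fit(X_new, y_new, epochs=3, batch_size=128)
model.save("pretrained_model.h5")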

How to predict whether a given value is an anomaly or not with trained pyod.LOCI and machine learning models?

I used the pyod.loci model to detect outliers and trained the model with 100 records of data.
I see that most outlier-detector models only find outliers within the data they were fitted on. I want to use the same trained model to predict whether an unseen data point is an anomaly or not.
Can someone help me with an idea or a solution? The model should also work for single-column or multi-column data.

How to continuously train our pre-trained model on real-time data?

I have some sensors which fetch data from a cement factory and send it to AWS IoT. The data is then run through a pre-trained model, and the model predicts the quality of the cement based on some parameters. The data arrives at one-second intervals.
Since the data is coming in real time, I want to train the model incrementally in real time.
Can anybody suggest how to train the model continuously?
You could aggregate a certain amount of training data and then use .partial_fit() to update your model.
.partial_fit() is the incremental learning option available in scikit-learn.
If your incremental data does not fit in RAM, then it's worth trying the dask-ml wrapper for incremental learning.
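A minimal sketch of that pattern with scikit-learn, assuming a stream that yields aggregated batches (the generator and the shapes are placeholders):

import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor()

def sensor_batches():
    # Stand-in for batches aggregated from the real-time sensor stream.
    for _ in range(100):
        X = np.random.rand(256, 10)
        y = np.random.rand(256)
        yield X, y

for X_batch, y_batch in sensor_batches():
    # Each call updates the existing coefficients instead of refitting from scratch.
    model.partial_fit(X_batch, y_batch)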
