I have built an ML model and exported it as a pickle file. The final goal is to use this file to make predictions in a web app.
What I want to know:
How can I use this pickle file to predict output in a Node.js server? Is it possible? If yes, can you explain how?
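Pickle is a Python-specific serialization format, so a Node.js process can't load a scikit-learn pickle directly. One common pattern is to wrap the model in a small Python HTTP service that the Node.js server calls. Below is a minimal sketch, assuming a scikit-learn estimator saved as model.pkl (the filename, port, and request shape are all assumptions, not fixed by your setup):

# predict_service.py - minimal Flask wrapper around the pickled model.
# The Node.js server POSTs JSON to http://localhost:5000/predict.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the model once at startup, not once per request.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects {"features": [[...], [...]]} - one row per prediction.
    features = request.get_json()["features"]
    predictions = model.predict(features)
    return jsonify(predictions=predictions.tolist())

if __name__ == "__main__":
    app.run(port=5000)

From Node.js you would then POST the feature rows to /predict (with fetch, axios, etc.) and read the predictions out of the JSON response.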
Related
New to SageMaker...
I trained a "linear-learner" classification model using the SageMaker API, and it saved a "model.tar.gz" file to my S3 path. From what I understand, SageMaker just used an image of a scikit-learn logistic regression model.
Finally, I'd like to gain access to the model object itself, so I unpacked the "model.tar.gz" file, only to find another file called "model_algo-1" with no extension.
Can anyone tell me how I can find the "real" model object without using the inference/endpoint deploy API provided by SageMaker? There are some things I want to look at manually.
Thanks,
Craig
Linear Learner is a built-in algorithm written using MXNet, and the binary is also MXNet compatible. You can't use this model outside of SageMaker, as there is no open-source implementation of it.
A little background: I successfully ran a regression experiment on AWS and saved the best model from that experiment. I downloaded the best model as model.tar.gz to use it for inference on my dataset elsewhere. I extracted it and uploaded the 'xgboost-model' file into my JupyterLab workspace, where my dataset is.
regression_model = 'xgboost-model'
predictions = regression_model.predict(X_test)
The error I'm getting is:
----> 1 predictions = regression_model.predict(X_test)
AttributeError: 'str' object has no attribute 'predict'
I know that XGBRegressor has a predict attribute, but my model doesn't seem to have it, even though it was exported as an XGBoost model. Any suggestions on what I'm supposed to be doing instead?
Hey, so you can use your model data in another notebook, but you need to make sure the dataset you're predicting on has the same attributes as the data you trained on, so that the model can predict accurately. The second thing to try is the boto3 invoke_endpoint call: the predict attribute comes from the SageMaker Python SDK, while boto3 is the general AWS Python SDK. I've attached an example of deploying an endpoint with this model.tar.gz. Simply passing in the filename string is not adequate; you need to create an endpoint and then perform inference with either the predict call or the invoke_endpoint call.
Example: https://github.com/RamVegiraju/Pre-Trained-Sklearn-SageMaker
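If you want predictions directly in your notebook rather than through an endpoint, the filename string has to be deserialized into an actual model object first. A minimal sketch, assuming the xgboost-model artifact produced by the built-in SageMaker XGBoost algorithm is a pickled Booster (which it typically is):

import pickle

import xgboost as xgb

# 'xgboost-model' is the file extracted from model.tar.gz; unpickle it
# to get a Booster object instead of a plain string.
with open("xgboost-model", "rb") as f:
    regression_model = pickle.load(f)

# Booster.predict expects a DMatrix, not a raw array or DataFrame.
predictions = regression_model.predict(xgb.DMatrix(X_test))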
I have trained my custom object detector using YOLOv4, and I have the following files:
yolov4-custom_best.weights
yolov4-custom.cfg
obj.names
'obj.names' contains the names of the classes of the custom objects.
In order to deploy my custom object detector to a web application, I need the files in a TensorFlow model format (e.g. object_detector.h5).
Can someone help?
To get the model file in .h5 format, you have to save the model. To save the model you can use:
# The '.h5' extension indicates that the model should be saved to HDF5.
model.save('my_model.h5')
You can also save the model by using tf.keras.models.save_model():
tf.keras.models.save_model(model, filepath, save_format='h5')
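To check that the file on disk is usable, you can load it back (a quick verification, assuming the model was saved as my_model.h5 above):

from tensorflow import keras

# Reload the saved HDF5 model and confirm its architecture.
model = keras.models.load_model('my_model.h5')
model.summary()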
For more details, please refer to this documentation. Thank you!
I have a LogisticRegressionCV model stored as a .pkl file, and the input data are images, but I don't know how to use the model in Flutter. Please help me if you know how, or tell me whether I must convert my model to another file format.
Thank you for your help.
As you've trained your model in Python and stored it in a .pkl file, one method is to have your Flutter backend call python3 predict_yourmodel.py your_model_params; after the run it will give you the model result.
Another way is to implement LogisticRegressionCV in Flutter yourself, as it is a simple model and can easily be implemented. You can store all your parameters and hyperparameters (the l1 or l2 penalty, etc.) in a text file instead of a .pkl file, for readability; see the sketch below.
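For that second route, here is a minimal sketch of exporting the fitted parameters to a plain-text (JSON) file, assuming a fitted scikit-learn LogisticRegressionCV named model; the Flutter side would then compute sigmoid(w·x + b) from these numbers itself:

import json

# Hypothetical export: dump the fitted weights so another runtime can
# reproduce the logistic-regression forward pass without unpickling.
def export_params(model, path):
    params = {
        "coef": model.coef_.tolist(),            # weight matrix w
        "intercept": model.intercept_.tolist(),  # bias b
        "classes": model.classes_.tolist(),      # class labels
    }
    with open(path, "w") as f:
        json.dump(params, f)

export_params(model, "logreg_params.json")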
I have successfully trained a convolutional neural network in Google Colab, in a file named model_prep.py. The model achieves 92% accuracy. Now that I'm happy with the model, I have used PyTorch to save it:
torch.save(model, '/content/drive/MyDrive/myModel.pt')
My understanding is that once the model has been fully trained, I can use PyTorch to save it and load it into future projects for predictions on new data. Therefore I created a separate test.py file, where I loaded the trained model like so:
model = torch.load('/content/drive/MyDrive/myModel.pt')
model.eval()
But within the new test.py file, I receive this error message:
AttributeError: Can't get attribute 'ResNet1D' on <module '__main__'>
This error does not occur when the model is loaded in the same notebook where it was trained (model_prep.py); it only occurs when the model is loaded into a separate notebook that does not contain the model architecture. How do I get around this? I would like to load the trained model into a new, separate file to run predictions on new data. Can someone suggest a solution?
In the future, I would also like to create a GUI using tkinter and deploy the trained model there to check predictions on new data. Is this possible?
I was facing the same error too. What it is trying to tell you is that the model class must be available (defined or imported) before you call torch.load().
If you go to PyTorch's tutorial on Saving and Loading Models, in the load section you can clearly see the line # Model class must be defined somewhere.
Hence I would recommend that in your test.py file you define the model class exactly as you did in model_prep.py (the file where you created your model), and then load as shown below.
# The ResNet1D class definition must be in scope (defined or imported) here.
model = torch.load(PATH, map_location=torch.device('cpu'))  # map_location is needed when loading on a CPU-only machine
model.eval()  # switch to evaluation mode so the model does not behave as if training
As stated by the PyTorch tutorial (here), saving a model this way will save the entire module using Python's pickle module. The disadvantage of this approach is that the serialized data is bound to the specific classes and the exact directory structure used when the model is saved. The reason for this is that pickle does not save the model class itself. Rather, it saves a path to the file containing the class, which is used during load time. Because of this, your code can break in various ways when used in other projects or after refactors.
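For completeness, the alternative that the same tutorial recommends is to save only the state_dict, which avoids pickling the class itself. A sketch, where the ResNet1D constructor arguments are placeholders for whatever you used in training:

# In model_prep.py: persist only the learned parameters.
torch.save(model.state_dict(), '/content/drive/MyDrive/myModel_state.pt')

# In test.py: the ResNet1D class must still be defined or imported here.
model = ResNet1D()  # hypothetical: pass your real constructor arguments
model.load_state_dict(torch.load('/content/drive/MyDrive/myModel_state.pt', map_location='cpu'))
model.eval()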
I fixed this with the TorchScript approach. We save the model with:
model_scripted = torch.jit.script(model)  # Export to TorchScript
model_scripted.save('model_scripted.pt')  # Save
and to load it:
model = torch.jit.load('model_scripted.pt')
model.eval()
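Once loaded, inference works like any other module. A sketch, where the input shape is a placeholder for whatever your ResNet1D expects:

import torch

x = torch.randn(1, 1, 1024)  # hypothetical batch; adjust to your real input shape
with torch.no_grad():        # no gradients needed for inference
    output = model(x)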
More details here