AttributeError: 'Functional' object has no attribute 'predict_classes' - python

In Google Colaboratory, using Python, I am trying to load a Keras model to classify an image, and I am getting the above error.

predict_classes is only available for the Sequential class.
With the Model class, you can use the predict method, which gives you a vector of probabilities, and then take the argmax of that vector (with np.argmax(y_pred1, axis=1)).
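A minimal sketch of that pattern, assuming model is the loaded functional model and x_batch is a preprocessed batch of images (hypothetical names):
import numpy as np
y_pred1 = model.predict(x_batch)                  # shape (batch_size, num_classes)
predicted_classes = np.argmax(y_pred1, axis=1)    # integer class index per sample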

Related

I wonder if model.tar.gz from Sagemaker can be used for inference in another notebook?

Little background: I successfully ran a regression experiment on AWS and saved the best model from that experiment. I have downloaded my best model as model.tar.gz to use it for inference on my dataset elsewhere. I extracted it and uploaded the 'xgboost-model' file into my Jupyter Lab workspace, where my dataset is.
regression_model = 'xgboost-model'
predictions = regression_model.predict(X_test)
The error I'm getting is:
----> 1 predictions = regression_model.predict(X_test)
AttributeError: 'str' object has no attribute 'predict'
I know that XGBRegressor has a predict attribute, but my model doesn't seem to have it, though it's exported as an xgboost model. Any suggestions on what I'm supposed to be doing instead?
You can use your model data in another notebook, but you need to make sure the dataset you're predicting on has the same attributes as the data you trained on, so that the model can predict accurately. Second, try the boto3 invoke_endpoint call: the predict attribute comes from the SageMaker Python SDK, while boto3 is the general AWS Python SDK. I've attached an example of deploying an endpoint with this model.tar.gz. Simply passing in the string is not adequate; you need to create an endpoint and then perform inference with either the predict call or the invoke_endpoint call.
Example: https://github.com/RamVegiraju/Pre-Trained-Sklearn-SageMaker
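A minimal sketch of the boto3 route, assuming an already-deployed endpoint named 'xgboost-regression-endpoint' (a hypothetical name) and CSV-serialized features:
import boto3
runtime = boto3.client('sagemaker-runtime')
response = runtime.invoke_endpoint(
    EndpointName='xgboost-regression-endpoint',   # hypothetical endpoint name
    ContentType='text/csv',
    Body='0.5,1.2,3.4')                           # one row of features as CSV
prediction = response['Body'].read().decode('utf-8')
print(prediction)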

How to load a tensorflow keras model saved with saved_model to use the predict function?

I have a Keras sequential model, which I saved using the command:
tf.keras.models.save_model(model, 'model')
Now it has the following folder structure,
Now I am loading the model using
model = tf.saved_model.load('model')
I also tried with
model = tf.keras.models.load_model('model')
Then I try to predict using
model.predict(padded_seq, verbose=0)
and it gives me the error:
AttributeError: '_UserObject' object has no attribute 'predict'
How do I use predict on the loaded model? I have tried with an h5 model and it worked fine, but my main use case is this kind of model, which throws the error.
You are using the incorrect function to load your model (tf.saved_model.load); it does not return a Keras object. From the docs:
The object returned by tf.saved_model.load is not a Keras object (i.e. doesn't have .fit, .predict, etc. methods).
You should be using tf.keras.models.load_model to load a Keras model.
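A minimal sketch of the intended path, reusing the padded_seq input from the question:
import tensorflow as tf
model = tf.keras.models.load_model('model')          # load the SavedModel directory as a Keras model
predictions = model.predict(padded_seq, verbose=0)   # Keras predict is available again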
I have encountered the same problem with SavedModel models downloaded from TF Hub (for example, InceptionV3). Even loading one with tf.keras.models.load_model() returns a plain model (a basic generic model kept for back-compatibility) that does not have the Keras API (predict, fit, summary, build, etc.) on top of it; the object type is: <tensorflow.python.saved_model.load.Loader._recreate_base_user_object.<locals>._UserObject object at 0x14a42ac2bcf8>
If you want just the inference call (predict), you can call your model directly on data (the __call__ method is defined), as follows:
model(padded_seq) # or model.__call__(padded_seq)
One workaround I have found to get the Keras API back is to wrap the model inside a KerasLayer in a Sequential model, as follows:
import tensorflow as tf
import tensorflow_hub as hub

model = tf.keras.Sequential([
    hub.KerasLayer("saved/model/path")
])
model.build(<input_shape>)
Now the model supports all the Keras APIs like predict, summary, etc., and this should now work:
model.predict(padded_seq, verbose=0)

Unable to save tensorflow model containing the DNNRegressor estimator

I am a beginner in Machine Learning, and I've got a small doubt!
I have been working on some machine learning code where I use TensorFlow to predict future values. The code uses a dataset that is one-hot encoded first; then the OHE columns are combined and the estimator is created. For the estimator model I have used DNNRegressor. There is no Keras used anywhere in the code.
model = tf.estimator.DNNRegressor(
    hidden_units=[100, 100, 100],
    feature_columns=feature_columns,
    optimizer=tf.optimizers.Adam(learning_rate=0.01),
    activation_fn=tf.nn.relu)
Now, I tried using Pickle for saving. However, I get this error:
AttributeError: Can't pickle local object 'DNNRegressorV2.__init__.<locals>._model_fn'
I tried the same using joblib, but I got the following issue:
PicklingError: Can't pickle <function DNNRegressorV2.__init__.<locals>._model_fn at 0x00000175F1A0E948>: it's not found as tensorflow_estimator.python.estimator.canned.dnn.DNNRegressorV2.__init__.<locals>._model_fn
Following this, I tried this code
tf.keras.models.save_model(model, filepath, overwrite=True,
                           include_optimizer=True, save_format=None,
                           signatures=None, options=None)
But I got the error:
AttributeError: 'DNNRegressorV2' object has no attribute 'built'
I have also tried other methods such as model.save() and model.to_json(), and also tried saving using the API, but none of them worked.
Can someone help me out with the same?
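As a side note, tf.estimator models are normally exported with export_saved_model rather than pickled or saved through the Keras API; a minimal sketch, assuming the same feature_columns used for training:
import tensorflow as tf
# Build a serving input receiver from the training feature columns,
# then export the estimator as a SavedModel instead of pickling it.
feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
serving_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
export_path = model.export_saved_model('exported_model', serving_input_fn)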

How to convert Scikit Learn OneVsRestClassifier predict method output to dense array for google cloud ML?

I have a model trained with an sklearn Pipeline and OneVsRestClassifier that I'm trying to deploy to Cloud ML Engine; however, when I use the command:
gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances $INPUT_FILE
I receive the error:
{
"error": "Prediction failed: Bad output type returned.The predict function should return either a numpy ndarray or a list."
}
This leads me to believe the issue is that the OneVsRestClassifier's predict method output is a sparse matrix, when it should be a numpy array. How can I convert its output to a dense array in my Pipeline?
The pipeline's architecture looks like this:
Pipeline([('tfidf', tfidf), ('clf', OneVsRestClassifier(XGBClassifier()))])
Thanks!
I've tried using the methods here (Google Cloud ML-engine scikit-learn prediction probability 'predict_proba()') to overwrite the OneVsRestClassifier's predict method with its predict_proba method; however, this results in the following error when I try to pickle the new pipeline:
PicklingError: Can't pickle <function OneVsRestClassifier.predict_proba at 0x10a8f9d08>: it's not the same object as sklearn.multiclass.OneVsRestClassifier.predict_proba
AI Platform (formerly known as Cloud Machine Learning Engine) serves your model and expects the input and the output to be json-serializable. If your model returns a sparse matrix, then you need to convert it to a dense matrix (see this for more information).
If you choose to overwrite predict_proba, then you are deploying your model with some custom code (your code that overwrites the function). You will then need to package up your custom code and pass it alongside your model when you deploy your model. For more information on how to deploy models with custom code, please visit Custom prediction routines on AI Platform.
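One way to keep the pipeline picklable while returning a dense ndarray is a small subclass whose predict densifies its own output; a minimal sketch (the class name is made up for illustration):
import numpy as np
from scipy import sparse
from sklearn.multiclass import OneVsRestClassifier

class DenseOneVsRestClassifier(OneVsRestClassifier):
    # predict() always returns a dense numpy array instead of a sparse matrix
    def predict(self, X):
        y = super().predict(X)
        return y.toarray() if sparse.issparse(y) else np.asarray(y)

It can then stand in for the plain classifier, e.g. Pipeline([('tfidf', tfidf), ('clf', DenseOneVsRestClassifier(XGBClassifier()))]), and because it is a module-level class it pickles normally, unlike a monkey-patched predict method.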

Couldn't Train Tensor Object Detection API Models

I am using TensorFlow's Object Detection API. I successfully trained with the 'ssd_mobilenet_v1_coco_2017_11_17' model, but when I later switched to other models from the provided set, the training process fails at startup with the error:
"TypeError: Expected int32, got range <0,3> of type 'range' instead".
This error is thrown for every model other than ssd_mobilenet_v1_coco_2017_11_17.
I used 300x300 images for training for all the models.
I have attached images of the command prompt window showing the error message. I am using TensorFlow 1.5 and Python 3.6.
Please modify line 154 from:
tf.constant(range(num_boundaries), dtype=tf.int32),
to:
tf.constant(list(range(num_boundaries)), dtype=tf.int32),
In Python 3, range() returns a lazy range object rather than a list, which this TensorFlow version cannot convert to a tensor; wrapping it in list() materializes the integers first.
For further reference:
https://github.com/tensorflow/models/issues/3443
