I'm having trouble adding new elements to the config object of Keras models.
We can get the config object with the following code:
config = model.get_config()
But I cannot find any method like model.set_config().
I'd like to add my own elements to it.
Does anyone know how to do this?
Try:
from keras.models import model_from_config
new_model = model_from_config(your_config)
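There is no setter; instead you rebuild a new model from an edited config (note that weights are not carried over). A minimal sketch, assuming a config that round-trips cleanly through from_config for your Keras version:

config = model.get_config()
print(config)  # inspect the structure; it differs between Keras versions

# ...edit entries in the config dict/list as needed...

# rebuild a fresh, untrained model of the same class from the edited config
new_model = type(model).from_config(config)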
I get a model from Sagemaker of type:
<class 'xgboost.core.Booster'>
I can score this locally, which is great, but some Google searches have shown that it may not be possible to do "standard" things like this, taken from here:
plt.barh(boston.feature_names, xgb.feature_importances_)
Is it possible to transform xgboost.core.Booster to XGBRegressor? Maybe one could use the save_raw method, looking at this? Thanks!
So far I tried:
xgb_reg = xgb.XGBRegressor()
xgb_reg._Boster = model
xgb_reg.feature_importances_
but this results in:
NotFittedError: need to call fit or load_model beforehand
Something along these lines appears to work:

import tarfile
import xgboost as xgb

# extract the SageMaker artifact
local_model_path = "model.tar.gz"
with tarfile.open(local_model_path) as tar:
    tar.extractall()

# load the extracted booster file into an sklearn-style regressor
model = xgb.XGBRegressor()
model.load_model(model_file_name)  # model_file_name: the model file extracted above

model can then be used as usual; model.tar.gz is an artifact coming from SageMaker.
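Once loaded this way, the sklearn-style attribute from the question should be available. A small sketch (assuming the booster was trained with feature names; otherwise feature_names is None):

import matplotlib.pyplot as plt

# plot feature importances, as in the snippet quoted in the question
plt.barh(model.get_booster().feature_names, model.feature_importances_)
plt.show()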
I have a Python function call like so:
import torchvision
model = torchvision.models.resnet18(pretrained=configs.use_trained_models)
Which works fine.
If I attempt to make it dynamic:
import torchvision
model_name = 'resnet18'
model = torchvision.models[model_name](pretrained=configs.use_trained_models)
then it fails with:
TypeError: 'module' object is not subscriptable
Which makes sense, since torchvision.models is a module that exports a bunch of things, including the resnet functions:
# __init__.py for the "models" module
...
from .resnet import *
...
How can I call this function dynamically without knowing ahead of time its name (other than that I get a string with the function name)?
You can use the getattr function:
import torchvision
model_name = 'resnet18'
model = getattr(torchvision.models, model_name)(pretrained=configs.use_trained_models)
This is essentially the same as dot notation, just in function form: getattr accepts a string and retrieves the attribute/method by name.
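A slightly more defensive sketch of the same idea, failing early when the string does not name an attribute of the module (configs here stands in for the asker's own config object):

import torchvision

model_name = 'resnet18'
if not hasattr(torchvision.models, model_name):
    raise ValueError(f"torchvision.models has no model named {model_name!r}")
model = getattr(torchvision.models, model_name)(pretrained=configs.use_trained_models)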
The new APIs since Aug 2022 are as follows:
# returns list of available model names
torchvision.models.list_models()
# returns specified model with pretrained common weights
torchvision.models.get_model("alexnet", weights="DEFAULT")
# returns specified model with pretrained=False
torchvision.models.get_model("alexnet", weights=None)
# returns specified model with specific pretrained weights
torchvision.models.get_model("resnet50", weights=torchvision.models.ResNet50_Weights.IMAGENET1K_V2)
Reference:
https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/
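Putting the two pieces together, a sketch of the string-based lookup using the new registration API (this requires torchvision >= 0.14):

import torchvision

model_name = 'resnet18'
# validate the name against the registry before constructing
if model_name not in torchvision.models.list_models():
    raise ValueError(f"unknown model name: {model_name}")
model = torchvision.models.get_model(model_name, weights="DEFAULT")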
I am trying to save my configuration for a Keras model. I would like to be able to read the configuration from the file to be able to reproduce the training.
Before I implemented a custom metric as a function, I could just save the config the way shown below, without mean_pred. Now I am running into TypeError: Object of type 'function' is not JSON serializable.
Here I read that it is possible to get the function name as a string via custom_metric_name = mean_pred.__name__. I would like to save not only the name but, if possible, a reference to the function itself.
Perhaps, as mentioned here, I should also think about not storing my configuration in the .py file but using ConfigObj instead. Unless that would solve my current problem, I would implement it later.
Minimum working example of problem:
import keras.backend as K
import json

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

config = {'epochs': 500,
          'loss': {'class': 'categorical_crossentropy'},
          'optimizer': 'Adam',
          'metrics': {'class': ['accuracy', mean_pred]}
          }

# Do the training etc...

config_filename = 'config.txt'
with open(config_filename, 'w') as f:
    f.write(json.dumps(config))
I would greatly appreciate help with this problem, as well as other approaches to saving my configuration in the best way possible.
To solve my problem, I saved the name of the function as a string in the config file and then looked the function up in a dictionary to use it as a metric in the model. One could additionally use 'class': ['accuracy', mean_pred.__name__] to save the name of the function as a string in the config.
This also works for multiple custom functions and for additional keys under metrics (e.g., defining metrics for 'reg' alongside 'class' when doing regression and classification).
import keras.backend as K
import json
from collections import defaultdict

def mean_pred(y_true, y_pred):
    return K.mean(y_pred)

config = {'epochs': 500,
          'loss': {'class': 'categorical_crossentropy'},
          'optimizer': 'Adam',
          'metrics': {'class': ['accuracy', 'mean_pred']}
          }

custom_metrics = {'mean_pred': mean_pred}

metrics = defaultdict(list)
for metric_type, metric_functions in config['metrics'].items():
    for function in metric_functions:
        if function in custom_metrics.keys():
            metrics[metric_type].append(custom_metrics[function])
        else:
            metrics[metric_type].append(function)

# Do the training, use metrics

config_filename = 'config.txt'
with open(config_filename, 'w') as f:
    f.write(json.dumps(config))
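To actually reproduce a run from the file, the same lookup can be applied when reading the config back in; a sketch, assuming the custom_metrics dict from above is defined in the training script:

import json
from collections import defaultdict

with open('config.txt') as f:
    loaded_config = json.load(f)

# resolve metric name strings back to callables where we have them
metrics = defaultdict(list)
for metric_type, metric_names in loaded_config['metrics'].items():
    for name in metric_names:
        metrics[metric_type].append(custom_metrics.get(name, name))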
I am trying to host an image classification model on my machine. I was trying to implement the steps given in this article: Medium serving ml models.
The code snippet I used is:
import tensorflow as tf

# The export path contains the name and the version of the model
tf.keras.backend.set_learning_phase(0)  # Ignore dropout at inference
model = tf.keras.models.load_model('./model_new.hdf5')
export_path = './model/1'

# Fetch the Keras session and save the model
# The signature definition is defined by the input and output tensors
# And stored with the default serving key
with tf.keras.backend.get_session() as sess:
    tf.saved_model.simple_save(
        sess,
        export_path,
        inputs={'input_image': model.input},
        outputs={t.name: t for t in model.outputs})
as given in the article above. My model is stored in model_new.hdf5 file, but I am getting the following error message.
NameError: name 'tf' is not defined
in the line
model = tf.keras.models.load_model('./model_new.hdf5')
is this the right way to use tf.saved_model.simple_save() ?
This is an error with loading your model, not with tf.saved_model.simple_save(). When you load a Keras model that uses custom objects or custom layers, you need to tell the loader about them. You can do this by passing a custom_objects dict, which in your case should contain tf:

import tensorflow as tf
model = tf.keras.models.load_model('model_new.hdf5', custom_objects={'tf': tf})
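For context, this NameError typically comes from a Lambda layer whose function refers to tf: the function itself is serialized into the HDF5 file, but the global name tf is not, so it has to be reintroduced at load time. A minimal sketch of a model that reproduces this:

import tensorflow as tf

inp = tf.keras.layers.Input(shape=(4,))
# the lambda body references the global name `tf`, which is not saved with the model
out = tf.keras.layers.Lambda(lambda x: tf.reduce_mean(x, axis=-1, keepdims=True))(inp)
tf.keras.Model(inp, out).save('model_new.hdf5')

# loading fails with NameError: name 'tf' is not defined, unless we pass it back in
model = tf.keras.models.load_model('model_new.hdf5', custom_objects={'tf': tf})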
I have the same problem as How can I load and use a PyTorch (.pth.tar) model, which does not have an accepted answer, and I cannot figure out how to follow the advice given there.
I'm new to PyTorch. I am trying to load the pretrained PyTorch model referenced here: https://github.com/macaodha/inat_comp_2018
I'm pretty sure I am missing some glue.
# load the model
import torch
from torchvision import transforms
from torch.autograd import Variable
from PIL import Image

model = torch.load("iNat_2018_InceptionV3.pth.tar", map_location='cpu')

# try to get it to classify an image
imsize = 256
loader = transforms.Compose([transforms.Scale(imsize), transforms.ToTensor()])

def image_loader(image_name):
    """load image, returns cuda tensor"""
    image = Image.open(image_name)
    image = loader(image).float()
    image = Variable(image, requires_grad=True)
    image = image.unsqueeze(0)
    return image.cpu()  # assumes that you're using CPU

image = image_loader("test-image.jpg")
Produces the error:
in <module>()
----> 1 model.predict(image)
AttributeError: 'dict' object has no attribute 'predict'
Problem
Your model isn't actually a model. When it was saved, it contained not only the parameters but also other information about the model, in a form somewhat similar to a dict.
Therefore, torch.load("iNat_2018_InceptionV3.pth.tar") simply returns a dict, which of course does not have an attribute called predict.
model=torch.load("iNat_2018_InceptionV3.pth.tar",map_location='cpu')
type(model)
# dict
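You can inspect what the checkpoint actually contains before going further; the keys below are typical of such training checkpoints, not guaranteed for this exact file:

print(model.keys())
# e.g. dict_keys(['epoch', 'arch', 'state_dict', 'optimizer', ...])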
Solution
What you need to do first in this case, and in general cases, is to instantiate your desired model class, as per the official guide "Load models".
# First try
from torchvision.models import Inception3
v3 = Inception3()
v3.load_state_dict(model['state_dict']) # model that was imported in your code.
However, directly loading model['state_dict'] will raise errors about mismatched shapes of Inception3's parameters.
You need to know what was changed in the Inception3 model after it was instantiated. Luckily, you can find that in the original author's train_inat.py.
# What the author has done
model = inception_v3(pretrained=True)
model.fc = nn.Linear(2048, args.num_classes) #where args.num_classes = 8142
model.aux_logits = False
Now that we know what to change, let's make some modifications to our first try.
# Second try
from torch import nn
from torchvision.models import Inception3

v3 = Inception3()
v3.fc = nn.Linear(2048, 8142)
v3.aux_logits = False
v3.load_state_dict(model['state_dict'])  # model that was imported in your code
And there you go, a successfully loaded model!
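From here, a minimal inference sketch that reuses the image_loader from the question (assuming the loading code above succeeded):

import torch

v3.eval()  # switch off dropout / batch-norm updates for inference
with torch.no_grad():
    logits = v3(image)                 # image = image_loader("test-image.jpg")
    pred_class = logits.argmax(dim=1)  # index of the highest-scoring class
print(pred_class)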