I'm trying to set up a pre-trained model from huggingface. This is the start of my code:
from transformers import pipeline, TFAutoModelForSequenceClassification
model_name = "microsoft/deberta-base-mnli"
text_classifier = pipeline("text-classification", model=model_name)
model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, ignore_mismatched_sizes=True, from_pt=True)
But I already get the following error for this last line:
Traceback (most recent call last):
File "/home/liv/PycharmProjects/ZSTC_for_MHC/models/baselines/single_task.py", line 29, in <module>
model = TFAutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2, ignore_mismatched_sizes=True,
File "/home/liv/PycharmProjects/ZSTC_for_MHC/.venv/lib/python3.9/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/liv/PycharmProjects/ZSTC_for_MHC/.venv/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 1972, in from_pretrained
return load_pytorch_checkpoint_in_tf2_model(model, resolved_archive_file, allow_missing_keys=True)
File "/home/liv/PycharmProjects/ZSTC_for_MHC/.venv/lib/python3.9/site-packages/transformers/modeling_tf_pytorch_utils.py", line 122, in load_pytorch_checkpoint_in_tf2_model
logger.info(f"PyTorch checkpoint contains {sum(t.numel() for t in pt_state_dict.values()):,} parameters")
File "/home/liv/PycharmProjects/ZSTC_for_MHC/.venv/lib/python3.9/site-packages/transformers/modeling_tf_pytorch_utils.py", line 122, in <genexpr>
logger.info(f"PyTorch checkpoint contains {sum(t.numel() for t in pt_state_dict.values()):,} parameters")
AttributeError: 'dict' object has no attribute 'numel'
I'm not calling numel myself, and I don't even know which dictionary the error refers to. Furthermore, I want to use TensorFlow here, so I'm not sure whether it's normal that a PyTorch error shows up at all. Does anyone know how to fix this? Google didn't turn up any solutions.
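For what it's worth, the pipeline call in the snippet above loads the model without complaint, so as a minimal check (not a fix) here is the same checkpoint loaded with the plain PyTorch class; if that also works, the AttributeError seems isolated to the PyTorch-to-TensorFlow conversion triggered by from_pt=True:

from transformers import AutoModelForSequenceClassification

model_name = "microsoft/deberta-base-mnli"
# Same arguments as in my TF call, but without the PT->TF conversion step
pt_model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2, ignore_mismatched_sizes=True
)
print(pt_model.config.num_labels)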
I am trying to load an XGBClassifier in my Streamlit app from a pickle file.
When I load it and try to predict on new input values, it throws the error:
XGBoostError: [11:25:40] c:\users\administrator\workspace\xgboost-win64_release_1.6.0\src\data\array_interface.h:462: Unicode-3 is not supported.
The entire traceback is:
2022-07-02 11:25:40.046 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\\Anaconda3\lib\site-packages\streamlit\scriptrunner\script_runner.py", line 554, in _run_script
exec(code, module.__dict__)
File "temp.py", line 250, in <module>
st.write(clf.predict(feat_list))
File "C:\Users\\Anaconda3\lib\site-packages\xgboost\sklearn.py", line 1434, in predict
class_probs = super().predict(
File "C:\Users\\Anaconda3\lib\site-packages\xgboost\sklearn.py", line 1049, in predict
predts = self.get_booster().inplace_predict(
File "C:\Users\\Anaconda3\lib\site-packages\xgboost\core.py", line 2102, in inplace_predict
_check_call(
File "C:\Users\\Anaconda3\lib\site-packages\xgboost\core.py", line 203, in _check_call
raise XGBoostError(py_str(_LIB.XGBGetLastError()))
xgboost.core.XGBoostError: [11:25:40] c:\users\administrator\workspace\xgboost-win64_release_1.6.0\src\data\array_interface.h:462: Unicode-3 is not supported.
I load the model this way:
clf = pickle.load(open('xgb.pkl', "rb"))
Or
clf = xgboost.XGBClassifier(tree_method ="hist", enable_categorical=True)
clf.load_model("model.json")
And I predict using:
clf.predict(feat_list)
I had a similar problem that came with the same XGBoostError. In my case the cause was the dtype of the ndarray, which was supposed to be object.
Assuming that your feat_list is a numpy.ndarray and you create it like this:
feat_list = np.array(features)
adding dtype=object:
feat_list = np.array(features, dtype=object)
should do the trick.
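Put together, a minimal end-to-end sketch (the pickle file name and the example feature row below are placeholders, not taken from the question):

import pickle
import numpy as np

# Load the pickled classifier as in the question
clf = pickle.load(open("xgb.pkl", "rb"))

# Hypothetical feature row; without dtype=object, mixed inputs coming from
# Streamlit widgets can end up in a unicode-typed array, which XGBoost rejects
features = [25.0, 1, 3.5]
feat_list = np.array([features], dtype=object)

print(clf.predict(feat_list))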
I am saving and loading a model using the torch.save() and torch.load() commands.
While loading a fine-tuned Simple Transformers model in a Docker container, I am facing this error, which I am not able to resolve:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 594, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 853, in _load
result = unpickler.load()
File "/usr/local/lib/python3.7/dist-packages/transformers/models/xlm_roberta/tokenization_xlm_roberta.py", line 161, in __setstate__
self.sp_model.Load(self.vocab_file)
File "/usr/local/lib/python3.7/dist-packages/sentencepiece.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/usr/local/lib/python3.7/dist-packages/sentencepiece.py", line 177, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "/home/jupyter/.cache/huggingface/transformers/9df9ae4442348b73950203b63d1b8ed2d18eba68921872aee0c3a9d05b9673c6.00628a9eeb8baf4080d44a0abe9fe8057893de20c7cb6e6423cddbf452f7d4d8": No such file or directory Error #2
If anyone has any idea about it, please let me know.
I am using:
torch 1.7.1+cu101
sentence-transformers 0.3.9
simpletransformers 0.51.15
transformers 4.4.2
tensorflow 2.2.0
I suggest using state_dict objects: they are plain Python dictionaries, so they can easily be saved, updated, and restored, which gives you flexibility when restoring the model later. Here are the recommended save/load methods for saving models with a state_dict:
Save
torch.save(model.state_dict(), PATH)
Load
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
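Applied to a Hugging Face model like the one in the question, the same recipe might look roughly like this (a minimal sketch; the "xlm-roberta-base" checkpoint and the path are placeholders, not taken from your setup):

import torch
from transformers import AutoModel

PATH = "model_state_dict.pt"  # placeholder path

# Save only the weights, not pickled Python objects such as the tokenizer
model = AutoModel.from_pretrained("xlm-roberta-base")
torch.save(model.state_dict(), PATH)

# Restore: rebuild the architecture first, then load the weights into it
model = AutoModel.from_pretrained("xlm-roberta-base")
model.load_state_dict(torch.load(PATH, map_location="cpu"))
model.eval()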
I got the following error when trying to load a ResNet50 model. Where should I download the resnet50.h5 file?
Traceback (most recent call last):
File "C:\Users\drlng\Desktop\image-captioning-keras-resnet-main\app.py", line 61, in <module>
resnet = load_model('resnet.h5')
File "C:\Users\drlng\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\keras\saving\save.py", line 211, in load_model
loader_impl.parse_saved_model(filepath)
File "C:\Users\drlng\AppData\Local\Programs\Python\Python38\lib\site-packages\tensorflow\python\saved_model\loader_impl.py", line 111, in parse_saved_model
raise IOError("SavedModel file does not exist at: %s/{%s|%s}" %
OSError: SavedModel file does not exist at: resnet.h5/{saved_model.pbtxt|saved_model.pb}
I built my model using resnet50.py and read the ResNet-50 weights from the link below:
weights best!
You can download the pre-trained models there; it works well.
If you are looking for pre-trained weights for ResNet-50, you can find them here
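If all you need are the plain pre-trained ImageNet weights, here is a minimal sketch using Keras' built-in ResNet-50 (this assumes the stock architecture, not the custom model built from resnet50.py):

from tensorflow.keras.applications import ResNet50
from tensorflow.keras.models import load_model

# Downloads the ImageNet weights on first use and writes them to an .h5 file,
# so a later load_model('resnet.h5') finds an existing path
resnet = ResNet50(weights="imagenet")
resnet.save("resnet.h5")

resnet = load_model("resnet.h5")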
I have a checkpoint file saved after training a model in PyTorch. I have to inspect it in a different module, so I tried to load the checkpoint using the following code.
map_location = lambda storage, loc: storage
checkpoint = torch.load("model.pt", map_location=map_location)
But it raises a ModuleNotFoundError, which I couldn't find a way to resolve.
The error traceback:
Traceback (most recent call last):
File "main.py", line 11, in <module>
model = loadmodel(hook_feature)
File "/home/../model_loader.py", line 21, in loadmodel
checkpoint = torch.load(settings.MODEL_FILE, map_location=map_location)
File "/home/../.conda/envs/envreporting/lib/python3.6/site-packages/torch/serialization.py", line 584, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/../.conda/envs/envreporting/lib/python3.6/site-packages/torch/serialization.py", line 842, in _load
result = unpickler.load()
ModuleNotFoundError: No module named 'parse_config'
I couldn't find an existing issue relevant to this one.
Is it possible that you used https://github.com/victoresque/pytorch-template for training the model? In that case, the project also saves its config in the checkpoint, and you need to import the parse_config.py file in order to load it.
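If that is the case, a minimal sketch of the workaround (the template path below is a placeholder for wherever that project lives on your machine):

import sys
import torch

# Make the training project's root importable so the unpickler can resolve
# the parse_config references stored inside the checkpoint
sys.path.insert(0, "/path/to/pytorch-template")  # placeholder path
import parse_config  # imported only so the checkpoint can be unpickled

map_location = lambda storage, loc: storage
checkpoint = torch.load("model.pt", map_location=map_location)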
I trained an NER model from scratch on a custom dataset, without any other component in the pipeline. I then loaded the model, added another component (a PhraseMatcher) to the existing pipeline, and trained the NER again. Now I can't load the newly saved model.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/deusxmachine/.local/lib/python2.7/site-packages/spacy/__init__.py", line 19, in load
return util.load_model(name, **overrides)
File "/home/deusxmachine/.local/lib/python2.7/site-packages/spacy/util.py", line 116, in load_model
return load_model_from_path(Path(name), **overrides)
File "/home/deusxmachine/.local/lib/python2.7/site-packages/spacy/util.py", line 156, in load_model_from_path
component = nlp.create_pipe(name, config=config)
File "/home/deusxmachine/.local/lib/python2.7/site-packages/spacy/language.py", line 215, in create_pipe
raise KeyError("Can't find factory for '{}'.".format(name))
KeyError: u"Can't find factory for 'my_component'."
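A hedged sketch of two common workarounds for this kind of KeyError in spaCy 2.x (the component class, its registered name, and the model path below are placeholders and are not confirmed against the setup above): either register a factory for the custom component before loading, or load the model with that component disabled.

import spacy
from spacy.language import Language

# Placeholder standing in for whatever custom component was added before saving
class MyComponent(object):
    name = 'my_component'
    def __init__(self, nlp, **cfg):
        self.nlp = nlp
    def __call__(self, doc):
        return doc

# Option 1: register a factory under the saved component's name so
# create_pipe() can rebuild it when the model directory is loaded
Language.factories['my_component'] = lambda nlp, **cfg: MyComponent(nlp, **cfg)
nlp = spacy.load('/path/to/saved_model')

# Option 2: load the model while skipping the unknown component entirely
nlp = spacy.load('/path/to/saved_model', disable=['my_component'])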