I can't create the body of ResNet18 with fastai - python

I'm trying to build the body of the ResNet18 in this code:
from fastai.vision.data import create_body
from fastai.vision import models
from torchvision.models.resnet import resnet18
from fastai.vision.models.unet import DynamicUnet
import torch
def build_res_unet(n_input=1, n_output=2, size=256):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    body = create_body(resnet18, n_in=n_input, pretrained=True, cut=-2)
    net_G = DynamicUnet(body, n_output, (size, size)).to(device)
    return net_G
net_G = build_res_unet(n_input=1, n_output=2, size=256)
but I keep getting an error:
TypeError: create_body() got an unexpected keyword argument 'n_in'
but in the fastai docs the n_in parameter is present.
How can I create the body? Am I missing something?

I tested the code on my local machine and it runs perfectly; maybe there is some problem with Google Colab. I will update this answer if I find a way to make it run on Colab.
EDIT: I solved the problem by adding !pip install fastai==2.4 on Google Colab; the version preinstalled on Colab was very old.
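A quick way to confirm the version mismatch before reinstalling is to print the version Colab actually loaded; this is just a sanity-check sketch, and any fastai 2.x release whose create_body accepts n_in should work:
import fastai
print(fastai.__version__)  # create_body() only accepts n_in in newer fastai 2.x releases
# if this prints an old version, pin a newer release and restart the runtime:
# !pip install fastai==2.4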

Related

tokenizer.push_to_hub(repo_name) is not working

I'm trying to push my tokenizer to my Hugging Face repo.
It consists of the model vocab.json (I'm making a speech recognition model).
My code:
vocab_dict["|"] = vocab_dict[" "]
del vocab_dict[" "]
vocab_dict["[UNK]"] = len(vocab_dict)
vocab_dict["[PAD]"] = len(vocab_dict)
len(vocab_dict)
import json
with open('vocab.json', 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
from transformers import Wav2Vec2CTCTokenizer
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("./", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
from huggingface_hub import login
login('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')
repo_name = "Foxasdf/ArabicTextToSpeech"
add_to_git_credential=True
tokenizer.push_to_hub(repo_name)
the tokenizer.push_to_hub(repo_name) is giving me this error:
TypeError: create_repo() got an unexpected keyword argument 'organization'
I have logged in to my Hugging Face account using
from huggingface_hub import notebook_login
notebook_login()
but the error is still the same.
Here's a link to my Colab notebook; you can see the full code and the error there: https://colab.research.google.com/drive/11tkQ85SfaT6U_1PXDNwk0Q6qogw2r2sw?hl=ar&hl=en&authuser=0#scrollTo=WkbZ_Wcidq8Z
I have the same problem. It is somehow associated with the version of transformers - I have 4.6. When I change the environment to one with transformers 4.11.3, the problem is that the code tries to clone a repository that I have not yet created, and there is an error "Remote repository not found ...".
I checked more, and it looks like an issue with the version of the huggingface_hub library - when it is downgraded to 0.10.1 it should work.
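A minimal sketch of that workaround for Colab, assuming 0.10.1 is compatible with the rest of your dependencies (the transformers version may also need to match, 4.11.3 per the answer above; restart the runtime after installing):
!pip install "huggingface_hub==0.10.1"
import huggingface_hub
print(huggingface_hub.__version__)  # expect 0.10.1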

Could not find class for TF Ops: TensorListFromTensor when I'm trying to import a trained model with Tensorflow in DeepLearning4j

I'm new to Tensorflow and I'm trying to import a frozen graph (.pb file) that was trained in Python into a Java project using Deeplearning4j.
It seems that the model was saved successfully and it is working in Python, but when I try to import it with DL4J I'm getting the following issue and I don't know why:
Exception in thread "main" java.lang.IllegalStateException: Could not find class for TF Ops: TensorListFromTensor
at org.nd4j.common.base.Preconditions.throwStateEx(Preconditions.java:639)
at org.nd4j.common.base.Preconditions.checkState(Preconditions.java:301)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:283)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:141)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:87)
at org.nd4j.imports.graphmapper.tf.TFGraphMapper.importGraph(TFGraphMapper.java:73)
at MLModel.loadModel(MLModel.java:30)
This is my model in Python:
def RNN():
    inputs = tf.keras.layers.Input(name='inputs', shape=[max_len])
    layer = tf.keras.layers.Embedding(max_words, 50, input_length=max_len)(inputs)
    layer = tf.keras.layers.LSTM(64)(layer)
    layer = tf.keras.layers.Dense(256, name='FC1')(layer)
    layer = tf.keras.layers.Activation('relu')(layer)
    layer = tf.keras.layers.Dropout(0.5)(layer)
    layer = tf.keras.layers.Dense(12, name='out_layer')(layer)
    layer = tf.keras.layers.Activation('softmax')(layer)
    model = tf.keras.models.Model(inputs=inputs, outputs=layer)
    return model
I based the export on this blog post: Save, Load and Inference From TensorFlow 2.x Frozen Graph.
And this is how I'm trying to import the model in Java with DeepLearning4J:
public static void loadModel(String filepath) throws Exception {
    File file = new File(filepath);
    if (!file.exists()) {
        file = new File(filepath);
    }
    sd = TFGraphMapper.importGraph(file);
    if (sd == null) {
        throw new Exception("Error loading model : " + file);
    }
}
I'm getting the exception in sd = TFGraphMapper.importGraph(file);
Does anyone know if I'm missing something?
That is the old model import. Please use the new one; the old one is not supported and will not be. You can find the new one here:
https://deeplearning4j.konduit.ai/samediff/explanation/model-import-framework
Both TensorFlow and ONNX work similarly. For TensorFlow use:
// create the framework importer
TensorflowFrameworkImporter tensorflowFrameworkImporter = new TensorflowFrameworkImporter();
File pathToPbFile = ...;
SameDiff graph = tensorflowFrameworkImporter.runImport(pathToPbFile.getAbsolutePath(), Collections.emptyMap());
File an issue on the GitHub repo (https://github.com/deeplearning4j/deeplearning4j/issues/new) if something doesn't work for you.
Also note that if you use the tf.keras API, you can instead import the model using the Keras HDF5 format (the old one).
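On the Python side, that route just means saving the tf.keras model in HDF5 format rather than freezing a graph; a small sketch with an illustrative filename:
# save the RNN() model defined above in the (old) Keras HDF5 format
model = RNN()
model.save('rnn_model.h5')  # DL4J's Keras model import can read this file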
For many graphs, you may also need to save the model and freeze it first. You can use the following helper for that:
import tensorflow as tf
from tensorflow.core.framework.graph_pb2 import GraphDef
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

def convert_saved_model(saved_model_dir) -> GraphDef:
    """
    Convert the saved model (expanded as a directory)
    to a frozen graph def
    :param saved_model_dir: the input model directory
    :return: the loaded graph def with all parameters in the model
    """
    saved_model = tf.saved_model.load(saved_model_dir)
    # the serving signature is a ConcreteFunction whose variables can be frozen
    graph_def = saved_model.signatures['serving_default']
    frozen = convert_variables_to_constants_v2(graph_def)
    return frozen.graph.as_graph_def()
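A hypothetical end-to-end use of that helper with the RNN() model above; the directory and file names are illustrative, not from the original answer:
model = RNN()
tf.saved_model.save(model, 'saved_model_dir')       # expand the model as a SavedModel
graph_def = convert_saved_model('saved_model_dir')  # freeze variables into constants
with tf.io.gfile.GFile('frozen_model.pb', 'wb') as f:
    f.write(graph_def.SerializeToString())          # this .pb is what DL4J imports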
We publish more code and utilities for that kind of thing here:
https://github.com/deeplearning4j/deeplearning4j/tree/master/contrib/omnihub/src/omnihub/frameworks

cannot import name 'ESMForMaskedLM' from 'transformers' on Google colab

I am fine-tuning the Facebook ESM transformer with a FASTA file of sequences. However, I get ImportError: cannot import name 'ESMForMaskedLM' from 'transformers' when running the cell. I have been following the Hugging Face model page, but I haven't managed to make the import work. I am using Google Colab. Help is much appreciated.
The code:
!pip install transformers
from transformers import ESMForMaskedLM, ESMTokenizer, pipeline
tokenizer = ESMTokenizer.from_pretrained("facebook/esm-1b", do_lower_case=False)
model = ESMForMaskedLM.from_pretrained("facebook/esm-1b")
unmasker = pipeline('fill-mask', model=model, tokenizer=tokenizer)
unmasker('QERLKSIVRILE<mask>SLGYNIVAT')
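One likely cause, offered as an assumption rather than a confirmed fix: the ESM classes only ship with newer transformers releases, and they were later re-cased (ESMForMaskedLM became EsmForMaskedLM, ESMTokenizer became EsmTokenizer). Upgrading and trying both spellings sidesteps the ImportError:
!pip install -U transformers
try:
    from transformers import ESMForMaskedLM, ESMTokenizer  # older spelling
except ImportError:
    # newer releases use the Esm-cased names
    from transformers import EsmForMaskedLM as ESMForMaskedLM, EsmTokenizer as ESMTokenizer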

onnx_to_keras: "NotImplementedError: Can't modify this type of data"

I'm trying to import an onnx model and then convert it to Keras.
Below is the snippet of code I use for saving the model in ONNX format:
import onnxmltools
onnx_model = onnxmltools.convert_keras(model, model.name)
onnxmltools.utils.save_model(onnx_model, 'gdrive/My Drive/Model/RNN_comfort_model.onnx')
Below is the snippet of code I use for importing and converting it:
import onnx
import onnxmltools
from onnx2keras import onnx_to_keras
onnx_model = onnx.load('gdrive/My Drive/Model/RNN_comfort_model.onnx')
k_model = onnx_to_keras(onnx_model, ['lstm_5_input'], name_policy='renumerate')
When I run the last piece of code, I get
NotImplementedError: Can't modify this type of data
A screenshot (not reproduced here) shows the whole traceback.
I already checked that the input_names field is correct.
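Since onnx_to_keras needs the exact graph input names, one quick check (a sketch, not from the original post) is to print the names the ONNX graph actually declares and compare them against ['lstm_5_input']:
import onnx
onnx_model = onnx.load('gdrive/My Drive/Model/RNN_comfort_model.onnx')
print([inp.name for inp in onnx_model.graph.input])  # the names onnx_to_keras must receive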

ValueError while deploying tensorflow model to Amazon SageMaker

I want to deploy my trained TensorFlow model to Amazon SageMaker. I am following the official guide here: https://aws.amazon.com/blogs/machine-learning/deploy-trained-keras-or-tensorflow-models-using-amazon-sagemaker/ to deploy my model from a Jupyter notebook.
But when I try to use this code:
predictor = sagemaker_model.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
It gives me the following error message:
ValueError: Error hosting endpoint sagemaker-tensorflow-2019-08-07-22-57-59-547: Failed Reason: The image '520713654638.dkr.ecr.us-west-1.amazonaws.com/sagemaker-tensorflow:1.12-cpu-py3 ' does not exist.
I think the tutorial does not tell me to create an image, and I do not know what to do.
import boto3, re
from sagemaker import get_execution_role

role = get_execution_role()

# make a tar ball of the model data files
import tarfile
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
    archive.add('export', recursive=True)

# create a new s3 bucket and upload the tarball to it
import sagemaker
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')

from sagemaker.tensorflow.model import TensorFlowModel
sagemaker_model = TensorFlowModel(model_data='s3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                                  role=role,
                                  framework_version='1.12',
                                  entry_point='train.py',
                                  py_version='py3')

%%time
# here I fail to deploy the model and get the error message
predictor = sagemaker_model.deploy(initial_instance_count=1,
                                   instance_type='ml.m4.xlarge')
https://github.com/aws/sagemaker-python-sdk/issues/912#issuecomment-510226311
As mentioned in the issue:
Python 3 isn't supported using the TensorFlowModel object, as the container uses the TensorFlow serving api library in conjunction with the GRPC client to handle making inferences, however the TensorFlow serving api isn't supported in Python 3 officially, so there are only Python 2 versions of the containers when using the TensorFlowModel object.
If you need Python 3 then you will need to use the Model object defined in #2 above. The inference script format will change if you need to handle pre and post processing. https://github.com/aws/sagemaker-tensorflow-serving-container#prepost-processing.
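A hedged sketch of that alternative: the Model class from the TensorFlow Serving container supports Python 3. The bucket path and versions below are illustrative, not from the original answer, and reuse the role and session from the question's code:
from sagemaker.tensorflow.serving import Model

serving_model = Model(model_data='s3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
                      role=role,
                      framework_version='1.12')
predictor = serving_model.deploy(initial_instance_count=1,
                                 instance_type='ml.m4.xlarge')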
