Chainer - predict using GPU - python

I have a trained Chainer model that I want to use to perform predictions. I can predict images on CPU by default, but I want to use a GPU, and I'm not sure how I should do it.
Here is what my code looks like:
model = MyModel()
chainer.serializers.load_npz("snapshot", model)
image = load_image(path) # returns a numpy array
with chainer.no_backprop_mode(), chainer.using_config("train", False):
    pred = model(image)
This works fine on CPU. What should I add to it to predict on GPU?
I tried:
model.to_gpu(0)
chainer.cuda.get_device_from_id(0).use()
converting image to a CuPy array with image = cupy.array(image)
With all of these options, I get an error:
ValueError: numpy and cupy must not be used together
type(W): <type 'cupy.core.core.ndarray'>, type(x): <type 'numpy.ndarray'>
What am I doing wrong here? How do I perform predictions on GPU?
Thanks in advance for your help.

I finally figured out a way to make it work:
Instead of using cupy.array(image), I used cuda.to_gpu(image) on the input and then cuda.to_cpu() on the result. I'm not sure of the difference between these, but it works.
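For reference, here is a minimal end-to-end sketch of the working GPU path, assuming MyModel, load_image, and path from the question; to_gpu/to_cpu copy arrays between NumPy and CuPy explicitly, which is why both the model and the input end up on the same device:

import chainer
from chainer import cuda

model = MyModel()
chainer.serializers.load_npz("snapshot", model)

cuda.get_device_from_id(0).use()      # make GPU 0 the current device
model.to_gpu(0)                       # move the model's weights to GPU 0

image = load_image(path)              # numpy array
image = cuda.to_gpu(image, device=0)  # numpy -> cupy on GPU 0

with chainer.no_backprop_mode(), chainer.using_config("train", False):
    pred = model(image)

pred = cuda.to_cpu(pred.data)         # cupy -> numpy for further processing

The "numpy and cupy must not be used together" error in the question is the symptom of moving only one side: the weights W were on the GPU while the input x was still a NumPy array.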

Related

Convert PyTorch to CoreML

I have this model: https://github.com/williamyang1991/DualStyleGAN and am trying to convert it to CoreML. So far I have created a copy of the original Colab notebook and appended two blocks at the end:
!pip install coremltools
import coremltools as ct
and
#@title Convert inverted image.
inverted_latent = torch.Tensor(result_latents[0][4]).cuda().unsqueeze(0).unsqueeze(1)
with torch.no_grad():
    net.eval()
    [sampled_src, sampled_dst] = net(inverted_latent, input_is_latent=True)[0]
traced_model = torch.jit.trace(net, inverted_latent)
mlmodel = ct.convert(traced_model, inputs=[ct.ImageType(name="input", shape=inverted_latent.shape, bias=[-1, -1, -1], scale=2.0/255.0)])
mlmodel.save("modelsaved.mlmodel")
To run it, you should put any image with a face into /content and, in /usr/local/lib/python3.7/dist-packages/torchvision/transforms/functional.py, replace the round method at lines 545 and 546 with np.round.
But then it fails at
mlmodel = ct.convert(...
with:
RuntimeError: PyTorch convert function for op 'pythonop' not implemented.
I assume there is a way to rewrite this module with methods that can be converted, am I right? But I can't figure out how to find the source of this module.
So my question is:
If my thinking is right, how can I find the source of the module?
And if I'm wrong, please advise me on the right way to do it.
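(One way to hunt for the offending op is to scan the traced TorchScript graph for prim::PythonOp nodes, which is what coremltools reports as 'pythonop'. This is only a sketch; it assumes traced_model from the code above and that this PyTorch build exposes Node.sourceRange():)

# Print every Python-level op left in the trace, with its origin.
for node in traced_model.graph.nodes():
    if node.kind() == "prim::PythonOp":
        print(node)                # the op and its inputs/outputs
        print(node.sourceRange())  # where it was defined, if available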
The code below takes the existing PyTorch model and converts it into a CoreML model with input and output features. The output is saved to MyCoreMLModel.mlmodel, which can be opened in Xcode or any other development environment that supports CoreML models.
import torch
import coremltools

model = torch.load('MyPyTorchModel.pt')
coreml_model = coremltools.converters.pytorch.from_pytorch(model,
                                                           input_features=['input'],
                                                           output_features=['output'])
coreml_model.save('MyCoreMLModel.mlmodel')

FastAI: Getting Prediction of Image from Learner

I want to be able to predict the class of a single image from the Learner, and I always get an index out of bounds exception.
Here is the code:
data = ImageDataLoader.from_folder(path, train="Train", valid="Valid",
                                   ds_tfms=get_transforms(), size=(256, 256), bs=32, num_workers=4)
# model is a Sequential one
learn = Learner(data, model, loss_func=nn.CrossEntropyLoss(), metrics=accuracy)
learn.fit_one_cycle(100, lr_max=3e-3)
img = ...  # PIL image loaded from a path
learn.predict(img)
The model is able to predict on the ImageDataLoader but not on a single image. If anyone has any clue, it would be much appreciated.
Here is a link to a FastAI forum thread, but it didn't solve the issue:
https://forums.fast.ai/t/how-to-use-learner-predict-list-index-out-of-range/81998/7
EDIT NOTE: I have tried to convert the image to a TensorFlow tensor, but another error is given. Photo of the error.
I think the problem is that data is a rank-4 tensor whereas img is rank 3; in other words, it is missing the batch (number of points) dimension up front. In TF one can fix that with tf.expand_dims, like so:
img = tf.expand_dims(img, axis=0)
or fix it when passing to the model:
learn.predict(tf.expand_dims(img, axis=0))
You can also look at tf.newaxis (see the second code example here).
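Since fastai models are PyTorch-based, the equivalent batch-dimension fix on the PyTorch side is unsqueeze or None-indexing; a small sketch with an illustrative shape:

import torch

img = torch.rand(3, 256, 256)  # rank-3 image tensor (C, H, W)
batched = img.unsqueeze(0)     # rank 4: (1, 3, 256, 256)
batched = img[None]            # equivalent None-indexing form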

How to convert a style transfer tensorflow model to mlmodel with flexible input shape?

I have read the Core ML guide which shows how to convert a pb model to an mlmodel using coremltools. However, I get the error below when trying to follow the guide, which means the input shape must be known statically.
ValueError: "ResizeBilinear" op: the second input, which is the output size, must be known statically
So, does anyone know how to convert to an mlmodel with a flexible input shape?
Here is my code:
import coremltools as ct
def mlmodel_image(pb):
    input_shape = ct.Shape(shape=(1, ct.RangeDim(1, 720), ct.RangeDim(1, 1280), 3))
    model_input = ct.ImageType(shape=input_shape)
    mlmodel = ct.convert(pb, inputs=[model_input], source='TensorFlow')
    mlmodel.save(pb.replace(".pb", "_img.mlmodel"))
    print('------save to ', pb.replace(".pb", "_img.mlmodel"))
Please try my sample:
https://github.com/dhrebeniuk/RealTimeFastStyleTransfer
Also take a look at my article with an attached Google Colab notebook in PyTorch.
It has instructions on how to run style transfer on iOS with maximum performance.
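If RangeDim keeps tripping over the static ResizeBilinear size, a workaround worth trying (a sketch, not verified against this particular model) is coremltools' EnumeratedShapes, which gives the converter a fixed set of concrete sizes instead of a continuous range; the shapes below are illustrative:

import coremltools as ct

input_shape = ct.EnumeratedShapes(shapes=[(1, 360, 640, 3),
                                          (1, 720, 1280, 3)],
                                  default=(1, 720, 1280, 3))
model_input = ct.ImageType(shape=input_shape)
mlmodel = ct.convert('model.pb', inputs=[model_input], source='tensorflow')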

Change input size of ONNX model

I need to change the input size of an ONNX model from [1024,2048,3] to [1,1024,2048,3].
For this, I've tried ONNX's update_model_dims.update_inputs_outputs_dims:
import onnx
from onnx.tools import update_model_dims
model = onnx.load("./0818_pspnet_1.0_713_resnet_v1/pspnet_citysc.onnx")
updated_model = update_model_dims.update_inputs_outputs_dims(model, {"inputs:0":[1,1024,2048,3]}, {"predictions:0":[1, 1025, 2049, 1]})
onnx.save(updated_model, 'pspnet_citysc_upd.onnx')
However, this is the error I end up with.
ValueError: Unable to set dimension value to 1 for axis 0 of inputs:0. Contradicts existing dimension value 1024.
The ONNX model is exported from a Tensorflow frozen graph of PSPNet. If the above approach does not work, would I need to modify the frozen graph?
Any help is greatly appreciated.
You can use the make_dynamic_shape_fixed tool from onnxruntime:
python -m onnxruntime.tools.make_dynamic_shape_fixed --dim_param batch --dim_value 1 model.onnx model.fixed.onnx
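Note that make_dynamic_shape_fixed pins an already-dynamic dimension; if the exported graph is genuinely rank 3, another option is to rewrite the declared input shape yourself. The sketch below only relabels the model's interface, so if the graph's ops were built for rank-3 tensors, re-exporting the frozen graph with a batch dimension (as you suspect) is still the robust fix:

import onnx
from onnx import TensorProto, helper

model = onnx.load("pspnet_citysc.onnx")

# Replace the declared rank-3 input with a rank-4 one, keeping the name.
old_input = model.graph.input[0]
new_input = helper.make_tensor_value_info(
    old_input.name, TensorProto.FLOAT, [1, 1024, 2048, 3])
del model.graph.input[0]
model.graph.input.extend([new_input])
onnx.save(model, "pspnet_citysc_rank4.onnx")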

Exporting and loading tf.contrib.estimator in Tensorflow 1.3 for prediction in python without using Tensorflow Serving

I am using tf.contrib.learn.DNNLinearCombinedRegressor with wide and deep feature columns. I use an input_fn to send data to this regressor for training. I have both real and categorical features, so my deep feature columns are made of sparse columns with embeddings as well as real-valued columns, while my linear columns are real-valued columns.

After training, I can naively access the trained model for prediction by using the estimator.predict_scores() method. However, to my understanding, this recreates the TensorFlow graph from the checkpoint file, which is slow for my intended application. I then tried to use estimator.export_savedmodel() to see if I can save and load the graph prior to actual prediction in my application. But I got stuck with it. I am facing the following issues:
To export, I did the following:
feature_spec = tf.feature_column.make_parse_example_spec(deep_feature_cols)
export_input_fn = tf.contrib.learn.utils.build_parsing_serving_input_fn(feature_spec)
servable_model_path = regressor.export_savedmodel(export_dir_base=servable_model_dir,
                                                  serving_input_fn=export_input_fn,
                                                  as_text=True)
(Note that the raw and default parsing functions didn't work for me due to a VarLen feature column, namely the embedding column.) Since I construct feature_spec with my deep columns only, I don't understand how the saved model will know what I am using for my linear columns. I don't know how to combine the two types of columns for saving/exporting; a sketch of what I mean follows.
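(A sketch, assuming make_parse_example_spec will merge both groups; linear_feature_cols is a placeholder name for my linear column list:)

# Build the parsing spec from both column groups so the export
# covers the wide (linear) features as well as the deep ones.
feature_spec = tf.feature_column.make_parse_example_spec(
    deep_feature_cols + linear_feature_cols)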
While I was able to export the trained regressor model successfully, I couldn't figure out how to use it for predictions. I have seen tutorials for doing this using Tensorflow Serving such as this. However, I just want something simple that I load and use in python. I am working on an interactive GUI application in Windows and so don't want to use bazel to build a saved model for Tensorflow Serving. I tried the following from this post, but it didn't work with my input_fn
from tensorflow.contrib import predictor
predict_fn = predictor.from_saved_model(servable_model_path)
predictions = predict_fn(get_input_fn(X, y, LABEL, num_epochs=1, shuffle=False))
where get_input_fn is my custom input_fn. When I run this code, I get the following error:
AttributeError                            Traceback (most recent call last)
<ipython-input-15-128f4d1ba155> in <module>()
----> 1 predictions = predict_fn(get_input_fn(X, y, LABEL, num_epochs=1, shuffle=False))

~\AppData\Local\Continuum\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\contrib\predictor\predictor.py in __call__(self, input_dict)
     63         """
     64         # TODO(jamieas): make validation optional?
---> 65         input_keys = set(input_dict.keys())
     66         expected_keys = set(self.feed_tensors.keys())
     67         unexpected_keys = input_keys - expected_keys

AttributeError: 'function' object has no attribute 'keys'
Any input is appreciated. I am on Windows 10, using TensorFlow 1.3 with Python 3.5 in an Anaconda virtual environment.
Poked around a bit more, and it turns out predict_fn expects a dictionary with the key set to the exact same name as the feature variable your model expects. In my CNN model the input was "inputs", as below:
def _cnn_model_fn(features, labels, mode):
    # Input layer
    print("features shape", features['inputs'])
Here's what I passed to predict_fn, where myimage is a flattened np.array:
predictions = predict_fn({"inputs": [myimage]})
