Keras create dataset from CSV without TensorFlow - python

On every question and tutorial I have found, tf.data.Dataset is used for CSV files, but I am not using TensorFlow; I am using PlaidML because my AMD GPU is not supported by ROCm. I have tried reusing the same code by doing
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"
from tensorflow import keras
but that still does not use the PlaidML backend. How do I load this dataset, https://www.kaggle.com/keplersmachines/kepler-labelled-time-series-data, into Keras without TensorFlow? I check whether the PlaidML backend is active by looking at the startup output: if it prints "Using plaidml.keras.backend backend", then PlaidML is being used. Also, only PlaidML recognizes my GPU, so TensorFlow would run on my CPU. Thank you.
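Since Keras's model.fit accepts plain NumPy arrays regardless of backend, the dataset can be loaded with pandas instead of tf.data. A minimal sketch, assuming the Kaggle archive's exoTrain.csv layout (a LABEL column where 2 marks an exoplanet star, followed by FLUX.* columns); an in-memory buffer stands in for the real file path here:

```python
import io

import pandas as pd

# Stand-in for the real file: in practice pass the path to exoTrain.csv
# to pd.read_csv instead of this buffer. Column names follow the
# dataset's description on Kaggle.
csv_data = io.StringIO(
    "LABEL,FLUX.1,FLUX.2,FLUX.3\n"
    "2,93.85,83.81,20.10\n"
    "1,-38.88,-33.83,-58.54\n"
)
df = pd.read_csv(csv_data)

# Features: every flux column; labels: 1.0 for exoplanet (LABEL == 2).
x = df.drop(columns=["LABEL"]).to_numpy(dtype="float32")
y = (df["LABEL"].to_numpy() == 2).astype("float32")

print(x.shape)     # (2, 3)
print(y.tolist())  # [1.0, 0.0]
```

The resulting arrays can be passed straight to model.fit(x, y, ...) on any Keras backend, PlaidML included.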

Related

How to convert Tensorflow 2.* trained with Keras model to .onnx format?

I use Python 3.7.4 with TensorFlow 2.0 and Keras 2.2.4-tf to train my own CNN model, and everything goes fine: I can use e.g. model.save(my_model) and then use the model in other Python scripts. The problem appears when I want to use the trained model in OpenCV with its DNN module in C++. cv::dnn::readNetFromTensorflow(model.pb, model.pbtxt) takes, as you can see, two arguments, and I can't get the second .pbtxt file. So I decided to use the .onnx format because of its flexibility. The problem is that the existing keras2onnx library only takes models from TensorFlow 1.*, which I want to avoid working with. Example code for the conversion is presented below:
import tensorflow as tf
import onnx
import keras2onnx
model = tf.keras.models.load_model(my_model_folder_path)
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, model_name_onnx)
Is there some other ways to convert such model to onnx format?
The latest version of keras2onnx (the master branch on GitHub) supports TensorFlow 2.
You can install it like this:
pip install git+https://github.com/microsoft/onnxconverter-common
pip install git+https://github.com/onnx/keras-onnx
You then need to open a file to hold the serialized ONNX object. See https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowExport.ipynb
import tensorflow as tf
import onnx
import keras2onnx

model = tf.keras.models.load_model('Model.h5')
onnx_model = keras2onnx.convert_keras(model, model.name)

# Serialize the converted graph to disk; the context manager closes the file.
with open("Sample_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

Tensorflow-Keras: Loading weights from checkpoint increases graph size significantly

I am working with a rather large network (98 million parameters) and use the Keras ModelCheckpoint callback to save my weights as described below. When I reload the saved weights using Keras, I can see that the loading operation adds approximately 10 operations per layer to my graph. This results in a huge memory increase for the whole network. Is this expected behavior, and if so, are there any known workarounds?
Details:
I am using tf.keras.callbacks.ModelCheckpoint with save_weights_only=True to save the weights.
The code for loading it is:
model.load_weights(path_to_existing_weights)
where model is a custom keras model.
I am using TensorFlow 1.14 and Keras 2.3.0.
Anyone that has any ideas?
This seems to me to be unexpected behavior, but I can't see anything obvious that you are doing wrong. Are you sure there were no changes to your model between the time you saved the weights and the time you reloaded them? All I can suggest is to try the same thing, except this time change the callback to save the entire model; then reload the model and check the graph. I also ran across the following note; I doubt it is the problem, but I would check it out:
In order to save your Keras models as HDF5 files, e.g. via keras.callbacks.ModelCheckpoint, Keras uses the h5py Python package. It is a dependency of Keras and should be installed by default. If you are unsure whether h5py is installed, you can open a Python shell and load the module via import h5py. If it imports without error it is installed; otherwise you can find detailed installation instructions here: http://docs.h5py.org/en/latest/build.html
Perhaps you might try reinstalling it.
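That check can also be scripted; a small stdlib-only sketch (it only verifies that Python can locate h5py on the import path, not that the underlying HDF5 library actually works):

```python
import importlib.util

# find_spec looks the package up without importing it, so this is safe
# even when h5py is broken or absent.
spec = importlib.util.find_spec("h5py")
msg = "h5py available" if spec is not None else "h5py missing - try: pip install h5py"
print(msg)
```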

Fix for "Check whether your GraphDef interpreting binary is up to date with your GraphDef generating binary"

I am running TensorFlow with React Native. I have a retrained Inception V3 graph. I used a GitHub repo example to test whether a model other than my own would work, and it functioned perfectly well. When I attempt to use my own model, I get the error: "Check whether your GraphDef interpreting binary is up to date with your GraphDef generating binary".
Dev info: Python 3.5, React Native 0.59, tensorflow 2.0.0a0, protobuf 3.7.1. From what I have seen suggested, I attempted training my model on an older version of TensorFlow (I was using 1.13.1; I tried 1.8.0), since I heard that my TensorFlow and protobuf versions may be too new to interpret my .pb file. This did not work, though, and I received the exact same error.
Here is the recognition code:
async recognizeImage() {
  try {
    const tfImageRecognition = new TfImageRecognition({
      model: require('./assets/retrained_graph.pb'),
      labels: require('./assets/retrained_labels.txt')
    })
    const results = await tfImageRecognition.recognize({
      image: this.image
    })
  } catch (err) {
    console.log(err)
  }
}
On my Docker container (where I am running TensorFlow Serving) I have:
TensorFlow ModelServer: 2.1.0-rc1
TensorFlow Library: 2.1.0
The problem is related to the local TensorFlow version you use to export your protobuf model. I know that if you export your h5 model with tf versions 1.14.0, 2.1.0 or 2.2.0, you will have this problem at inference time. You can try a different version such as 1.15.0; I think this happens because some TensorFlow versions don't support a particular layer at export time.
To change your local TensorFlow version you can do:
pip install tensorflow==1.15.0

Import MXNet file in Keras/Tensorflow

I am having trouble finding the answer to this.
I have an MXNet model in the form of two files, model.json and model.params. What is the cleanest way to load the network into a Keras installation with the TensorFlow backend?
Unfortunately, you cannot load native MXNet models into Keras.
You can try to convert your model using MMdnn, but depending on the complexity of your model it might not work.
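As a sketch of what that conversion could look like: MMdnn ships an mmconvert command-line tool, and an invocation along these lines should target Keras (the flag names follow MMdnn's README; the file names are the ones from the question, and depending on the model you may additionally need an --inputShape argument):

```shell
pip install mmdnn

# One-shot conversion: MXNet symbol/params in, Keras HDF5 model out.
mmconvert --srcFramework mxnet \
          --inputNetwork model.json \
          --inputWeight model.params \
          --dstFramework keras \
          --outputModel converted_model.h5
```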

Loading Torch7 trained models (.t7) in PyTorch

I am using the Torch7 library for implementing neural networks, and mostly I rely on pre-trained models. In Lua I use the torch.load function to load a model saved as a .t7 file. I am curious about switching to PyTorch (http://pytorch.org) and I have read the documentation, but I couldn't find any information on mechanisms for loading a pre-trained model. The only relevant page I was able to find is http://pytorch.org/docs/torch.html, but the torch.load function described there seems to load a file saved with pickle. If someone has additional information on loading .t7 models in PyTorch, please share it here.
The correct function is load_lua:
from torch.utils.serialization import load_lua
x = load_lua('x.t7')
As of PyTorch 1.0, torch.utils.serialization has been removed completely, so models can no longer be imported from Lua Torch into PyTorch directly. Instead, I would suggest installing PyTorch 0.4.1 through pip in a conda environment (so that you can remove it afterwards) and using this repo to convert your Lua Torch model to a PyTorch model, not just a torch.nn.legacy model that you cannot use for training. Then use PyTorch 1.x to do whatever you want with it. You can also train your converted Lua Torch models in PyTorch this way :)
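The throwaway-environment workflow suggested above might look like this in practice (the environment name is arbitrary, and the conversion step is whatever the converter repo documents):

```shell
# PyTorch 0.4.1 is the last release that still understands Lua Torch
# files via torch.utils.serialization.
conda create -n t7convert python=3.6 -y
conda activate t7convert
pip install torch==0.4.1

# ...run the converter repo's script here to turn model.t7 into model.pth...

# Remove the temporary environment once the converted model is saved.
conda deactivate
conda remove -n t7convert --all -y
```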
