How to use TensorFlow in a local Python project - python

I have a model created in TensorFlow that is already trained and accurate. How do I get this to run in a project? I can load the model, but I can't figure out how to feed it a single image that my software is generating.
Also, if this was a transfer learning project, do I have to recreate the model architecture before loading the weights?
All the tutorials are about setting it up in the cloud or with a local server, which I would like to avoid. I am tempted to save the data to disk and then run the model on it, but that is a lot slower.
Addition:
The environment I am building this for is a Google Colab Jupyter notebook. The idea is to have no installation for users, which is why it must be self-contained.
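For the single-image case, the usual sticking point is the batch dimension: a Keras `model.predict` call expects a batch of images, not one bare image. A minimal sketch, assuming a 224x224x3 input and a hypothetical `my_model.h5` file (the TensorFlow calls are shown as comments so the snippet stays dependency-light):

```python
import numpy as np

# Hypothetical model file and input size -- adjust to your own model:
# from tensorflow.keras.models import load_model
# model = load_model("my_model.h5")

# Suppose the software hands you one image as an H x W x C float array:
image = np.random.rand(224, 224, 3).astype("float32")

# Keras models predict on *batches*, so add a leading batch axis:
batch = np.expand_dims(image, axis=0)
print(batch.shape)  # (1, 224, 224, 3)

# scores = model.predict(batch)        # shape (1, num_classes)
# label = int(np.argmax(scores[0]))
```

The same `np.expand_dims` trick applies whether the array comes from a file, a camera, or an in-memory buffer.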

Related

Pickling a file

I am using Kedro in conjunction with databricks-connect to run my machine learning model. I trained and tested the model in a Databricks notebook and saved the pickle file of the model to Azure Blob Storage. To test the pipeline locally, I downloaded the pickle file and stored it locally. Kedro reads the file fine when it is stored locally; however, when I try to read the file into Kedro directly from Azure, I get the following error: "invalid load key, '?'".
Unfortunately I cannot post any screenshots since I am doing this for my job. My thinking is that the pickle file is stored differently on Azure, and only the downloaded local copy is stored correctly.
For a bit more clarity: I am attempting to store a logistic regression model trained using statsmodels (sm.Logit).
Pickles unfortunately aren't guaranteed to work across environments; at the very least the Python version (and possibly the OS) needs to be identical. Is there a mismatch here?
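One quick way to see whether the bytes coming back from Azure are intact is to reproduce the error locally: "invalid load key" is what `pickle` raises when the stream doesn't start with a valid opcode (a text-mode transfer or an HTML error page saved in place of the blob will do it). A small stdlib-only sketch:

```python
import pickle

# Stand-in for a pickled model, e.g. a statsmodels results object:
obj = {"coef": [0.1, -2.3], "intercept": 0.7}
raw = pickle.dumps(obj)            # bytes as written by the training job

# An intact byte stream round-trips fine:
assert pickle.loads(raw) == obj

# But if the transfer mangles even the first byte (text-mode download,
# an error page saved as the blob, a partial download), you get exactly
# the error from the question:
try:
    pickle.loads(b"?" + raw[1:])
except pickle.UnpicklingError as err:
    print(err)                     # invalid load key, '?'.
```

If the bytes read straight from Azure fail this way while the downloaded copy loads, the storage client (not pickle) is the thing to debug.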

Do I need the TensorFlow Object Detection API to use a trained model I made on a different computer?

Working on my local computer, I've created a TensorFlow object detector. I have exported the model (which I've tested using the checkpoints) to a protobuf file as well as several other formats (TF Lite, TF.js, etc.). I now need to transfer this trained model to another computer that doesn't have the Object Detection API or the other things I needed to build the model.
Do I need all these dependencies on the new machine? Or does the protobuf file contain everything that the machine will need? The new machine only has the basic Anaconda environment packages as well as TensorFlow.
Protobuf files most commonly contain both the model and the weights, so in theory you can load your model on any machine with TensorFlow.
The only problems I can think of are custom layers/losses/optimizers and data pre-/post-processing.

How to run your own python code on amazon sagemaker

I have Python code which uses Keras with the TensorFlow backend. My system doesn't support training this model due to insufficient memory. I want to make use of Amazon SageMaker.
However, all the tutorials I find are about deploying your model in Docker containers. My model isn't trained yet, and I want to train it on Amazon SageMaker.
Is there a way to do this?
EDIT: Also, can I turn my Python code into a script and run it on AWS SageMaker?
SageMaker lets users bring in custom training scripts and train their algorithms on SageMaker using one of the pre-built containers for frameworks like TensorFlow, MXNet, and PyTorch.
Please take a look at https://github.com/aws/amazon-sagemaker-examples/blob/master/frameworks/tensorflow/get_started_mnist_train.ipynb
It walks through how you can bring in your training script using TensorFlow and train it using SageMaker.
There are several other examples in the repository which will help you answer other questions you might have as you progress on with your SageMaker journey.
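As a rough illustration of what such a training script looks like, here is a minimal sketch of a SageMaker "script mode" entry point. The environment variable names (`SM_MODEL_DIR`, `SM_CHANNEL_TRAINING`) are the ones SageMaker's pre-built containers set; the default paths and the commented model-fitting lines are placeholders so the snippet also runs outside SageMaker:

```python
import argparse
import os

# SageMaker's framework container invokes this script and passes the
# data/model paths via environment variables and hyperparameters via CLI
# arguments. The defaults below are only for running locally.
parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=10)
parser.add_argument("--model-dir",
                    default=os.environ.get("SM_MODEL_DIR", "/opt/ml/model"))
parser.add_argument("--train",
                    default=os.environ.get("SM_CHANNEL_TRAINING",
                                           "/opt/ml/input/data/training"))
args, _ = parser.parse_known_args([])  # empty argv so the sketch runs anywhere

# ... build and fit the Keras model here, reading data from args.train ...
# model.save(os.path.join(args.model_dir, "model.h5"))
print(args.model_dir, args.epochs)
```

You would then pass this file as the `entry_point` of the SageMaker Python SDK's `TensorFlow` estimator and call `fit()` on it; the notebook linked above walks through that flow end to end.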

Is it possible to make predictions with a Keras/TensorFlow model without downloading TensorFlow and all its dependencies?

I'm trying to use, on a mobile phone (Android), a custom Keras model I trained with tensorflow-gpu on my desktop; however, I need to run it with Python on the phone as well. I looked up TensorFlow Lite, but that appears to be written for Java.
Is there any lite (Python) version of TensorFlow, some kind of barebones package that's just set up for making predictions from a TensorFlow/Keras model file? I'm trying to save space, so a solution under 50 MB would be preferred.
Thanks
TensorFlow Serving was built for the specific purpose of serving pre-trained models. I'm not sure if it runs on Android (or how difficult it is to make it run there), or what its compiled footprint is and whether that's less than 50 MB. If you can make it work, please do report back here!

I trained my model on google datalab but I can't run it on my local machine

I trained my model on Google Datalab and then exported it to make predictions on a separate machine. When I load the model in Datalab it works perfectly, but when I load the model on my local machine, I get weird errors such as:
Segmentation fault: 11
My assumption is that my environment variables are different in the two environments. How can I match both environments so that they have the same environment variables? Any guidance will be greatly appreciated.
This is the code I use to load my model:
from keras.models import load_model
model = load_model('new_model.h5')
I understand that you mean Python-related environment variables.
You might check these manually to compare the differences.
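One way to do that comparison is to print the interpreter and key package versions in both places and diff the output; a segmentation fault while loading an `.h5` model more often points to mismatched binary versions (Python, TensorFlow/Keras, h5py, numpy) than to shell environment variables. A small version-dump sketch (the package list is an assumption; adjust it to your stack):

```python
import platform
import sys

# Run this in both Datalab and the local machine, then diff the output.
print("python    :", sys.version.split()[0])
print("platform  :", platform.platform())
for pkg in ("tensorflow", "keras", "h5py", "numpy"):
    try:
        mod = __import__(pkg)
        print(f"{pkg:10}: {mod.__version__}")
    except ImportError:
        print(f"{pkg:10}: not installed")
```

If the versions differ, re-saving the model under the local versions (or pinning the local environment to match Datalab's) is usually the fix.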
