Exporting my Tensorflow model and code to a different PC - python

I have followed a number of tutorials and built an object detection model using Faster R-CNN in an Anaconda virtual environment. Now I want to showcase this model, but I run into problems when I try it on a different system without Anaconda: running it from CMD, it doesn't run at all.
I have done my research on exporting the model but hit a dead end each time.
I use Anaconda Prompt + Windows 10 + NVIDIA GPU + tensorflow-gpu==1.5 to run the model on my dedicated system.
I would like to know how I can export this to a different PC that doesn't have a GPU or Anaconda installed. Or is my approach completely wrong, and do I need all the dependencies used when I run it on my system?
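One way to approach this (a sketch, not the only route): record the exact package versions on the Anaconda machine, swap the GPU build of TensorFlow for the CPU-only build, and recreate the environment with plain pip on the target PC. Here `my_detector.py` is a placeholder for your own entry script:

```shell
# On the development machine, inside the conda env:
pip freeze > requirements.txt

# Edit requirements.txt: replace "tensorflow-gpu==1.5.0" with
# "tensorflow==1.5.0" (the CPU-only build of the same version).

# On the target PC (plain Python installed, no Anaconda, no GPU):
pip install -r requirements.txt
python my_detector.py
```

The CPU build runs the same saved model, just slower; what matters for loading the model is matching the TensorFlow version, not the hardware.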

Related

Can two kernels access the same conda environment at the same time even when using GPU?

I'm running one kernel to train a TensorFlow model, and it's using my GPU. Now, in the same conda environment, I would like to evaluate another model I trained earlier, which is also a TensorFlow model. I'm fairly sure I can run two kernels in the same conda environment in general, but I'm not sure about doing so when using the GPU. If I run a second kernel using TensorFlow, can it affect the kernel that was started earlier, especially in terms of GPU usage?
My environment: Windows10, tensorflow2.1, python3.7.9
This is not the best answer, but I realized that I can evaluate my model in another conda environment that has a different version of TensorFlow. In that environment, my CUDA and cuDNN versions are not compatible with the TensorFlow version, so my GPU was not used. This way, I evaluated a model without stopping or affecting the training running in the other kernel.
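A more direct way to get the same effect without a second environment is to hide the GPU from the evaluation kernel before TensorFlow is imported, so it cannot compete with the training kernel for GPU memory. A minimal sketch (the model path is a placeholder):

```python
import os

# Hide all CUDA devices from this process. This must run BEFORE
# importing tensorflow, because TF enumerates devices on first use.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

# import tensorflow as tf                            # TF now falls back to CPU
# model = tf.keras.models.load_model("my_model.h5")  # hypothetical path
# model.evaluate(...)
```

If you do want both kernels on the GPU, TF 2.x also offers `tf.config.experimental.set_memory_growth` so a process allocates GPU memory incrementally instead of grabbing it all up front.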

how to run GPU on Mac OS Big Sur/Jupyter notebook

I am trying to create a GPU environment in Jupyter notebook to run CNN models but have had trouble. I am on MacOS (Big Sur) and was following the instructions from: https://www.techentice.com/how-to-make-jupyter-notebook-to-run-on-gpu/
First, I understand that to create a separate GPU environment in Jupyter I need the CUDA toolkit. However, I found out that the CUDA toolkit no longer supports Mac.
Second, I understand that I have to download TensorFlow GPU, which apparently doesn't support Mac / Python 3.7.
I would be grateful for any help or advice. Essentially, I just want to be able to run my code on the GPU, as the CPU is way too slow for machine learning models. Is there any way around this?
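For context: NVIDIA's CUDA toolkit has indeed dropped macOS support, so the CUDA route is closed on Macs. On Apple-silicon Macs, Apple instead provides a Metal-based GPU plugin for TensorFlow; a hedged sketch (package availability depends on your macOS and Python versions, and it may not cover Intel Macs on Big Sur):

```shell
# Apple-silicon Macs; version constraints are illustrative
python -m pip install tensorflow-macos
python -m pip install tensorflow-metal   # Metal GPU acceleration plugin
```

After installing, the GPU shows up as a regular TensorFlow device, so Jupyter notebooks in that environment can use it without further configuration.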

Tensorflow in pycharm

I'm trying to use TensorFlow in PyCharm. I have selected the Anaconda Python interpreter in the settings and added the TensorFlow package, but it doesn't seem to work. I also ran pip install tensorflow from the Anaconda prompt, but it still doesn't work and I get this error:
No module named 'tensorflow'
Could someone help me? Thank you so much.
TensorFlow can be a bit of a pain to install. The process is completely different outside Anaconda, so I won't go into that.
This documentation is particularly helpful and is what I used to get TensorFlow working on my own PC:
https://docs.anaconda.com/anaconda/user-guide/tasks/tensorflow/
If you are doing CPU-only work in TensorFlow, then running this in an Anaconda command prompt will create an env for you to work with TF:
conda create -n tf tensorflow
conda activate tf
If you want to use your GPU with TensorFlow, then you need to check various things; for example, Windows and Linux only support CUDA 10.0 for TensorFlow 2.0. That being said, you can use the following to set up a GPU env:
conda create -n tf-gpu tensorflow-gpu
conda activate tf-gpu
Be aware that this may not result in a working env depending on your GPU etc., so I would recommend that you refer to this page: https://www.tensorflow.org/guide/gpu
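Once the env is activated, you can check from Python whether TensorFlow actually sees the GPU. A small sketch that also degrades gracefully if TensorFlow isn't installed in the current interpreter:

```python
def available_gpus():
    """Return the list of GPUs TensorFlow can use, or [] if TF is absent."""
    try:
        import tensorflow as tf  # deferred so the check works without TF
    except ImportError:
        return []
    # TF 2.x API; an empty list means TF will run on the CPU only.
    return tf.config.list_physical_devices("GPU")

print(available_gpus())
```

An empty list usually means the CUDA/cuDNN versions don't match the installed TensorFlow build, which is exactly the case the linked guide helps diagnose.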
As a personal side note: I would highly recommend using JupyterLab for organising and running machine learning tasks, as you can split code into cells with markdown descriptions of what each cell does, which I find really helpful for readability and organisation.

installing tensorflow without internet connection

I have to install TensorFlow + Keras on my computer, but I have no internet connection on it, so I have to download the packages on another computer and then transfer them to mine.
My question is: where can I safely download TensorFlow + Keras, and then how can I install them using Anaconda?
Thanks a lot for helping!
I will just assume you are using a Linux machine.
For tensorflow, you can follow the guide provided by florian.
Step 1
For Keras you just need to git clone or download the repository at https://github.com/keras-team/keras
Once you have cloned or downloaded the repository on the machine connected to the internet, transfer it to the one you want to use it on.
Step 2
Open a terminal and navigate to the transferred keras folder with cd. Run the setup by typing python setup.py install into the console, and you are done.
Step 3
To verify the installation, you can run one of the examples. Navigate to the examples folder inside the keras folder and type python mnist_cnn.py into the console. If everything was installed correctly, you should see the network training and producing output. Otherwise, an error message will be displayed in the console.
Here is the windows version for keras and anaconda
Step 1
Copy the downloaded folder of keras to %USERPROFILE%\Anaconda3\Lib\site-packages. Then use cd %USERPROFILE%\Anaconda3\Lib\site-packages\keras to get to the keras folder.
Step 2
In the same terminal, type python setup.py develop to install Keras on Windows.
Step 3
To check Keras, navigate into the examples folder and run the same example as in the Linux step 3.
My Windows skills are very rusty, so I won't be able to help you troubleshoot problems there. I recommend installing Linux on a second partition if you want to dive into deep learning, since it is a lot easier to set up a system for DL on Linux than on Windows; also, if you want to use AWS later, it is cheaper on Linux than on Windows.
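An alternative to cloning the repository, sketched below: use pip's download mode on the connected machine and point pip at the downloaded files on the offline one. This also pulls in all dependencies as wheels, which the git-clone route does not (the folder name is an example):

```shell
# On the machine WITH internet (same OS and Python version as the target):
pip download tensorflow keras -d ./pkgs

# Copy the ./pkgs folder to the offline machine, then install from it,
# telling pip not to contact the package index:
pip install --no-index --find-links ./pkgs tensorflow keras
```

Because wheels are platform- and Python-version-specific, the downloading machine should match the offline one as closely as possible.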

Python - ensure I'm running the same package versions in both Windows and Linux

I have a Windows 10 machine that I'm using to develop my code (Anaconda 3.5). Now I need to get my code running on a Linux server, so that others can use it as part of an application. What is the best way of setting up and maintaining my Linux environment so that it replicates the Windows one in terms of packages and version numbers?
I'm training and saving DataFrames, SVMs (Sklearn) and ANNs (Keras) in my Windows environment, which is running Anaconda Python 3.5.
On the Linux server I need to be able to load and use these models, which requires having the same packages and package versions.
How do I keep the environments running the same package versions?
The plan is to release newer and better models as I get more data. These might run on newer versions of Keras, Sklearn, etc. as they are released. How can I ensure that I can have the latest package versions in Python but still run older models (possibly trained and saved with older package versions) if required? Backwards compatibility is very important.
Background:
I'm creating a 'sizing algorithm' that uses a number of ANNs and SVMs. For others to use this algorithm, it's going to run on a Linux server and somehow (the software guy assures me it can be done) be integrated, or linked, into the company's software. The different models will be loaded into memory and used when called upon to size something. It is important that the older sizing algorithms can still be used even as I release newer, better versions.
Apparently I am the company's Python expert, even though I have only been using it since January and have no experience releasing algorithms for others to use. I would really appreciate your help with the best way to set up the system.
Many thanks
On a machine with the correct packages:
pip freeze > requirements.txt
On machines that need the correct packages, having copied that file to it:
pip install -r requirements.txt
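Since older models may need older package versions, it also helps to record which versions trained each model at save time, so you can later rebuild a matching environment. A minimal sketch using only the standard library (file and package names are examples):

```python
import json
import platform
from importlib import metadata  # Python 3.8+

def snapshot_versions(packages):
    """Record the installed version of each named package (None if absent)."""
    versions = {"python": platform.python_version()}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            versions[name] = None
    return versions

# Save the snapshot next to the trained model file.
snap = snapshot_versions(["scikit-learn", "keras"])
with open("model_v1_versions.json", "w") as f:
    json.dump(snap, f, indent=2)
```

When a legacy model must be served, the JSON file tells you exactly which versions to pin in a fresh virtual environment (or in the requirements.txt above).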
