tensorflow nightly wheel in Python

I'm a newbie in Python.
Can someone help me with the difference between
the tf-nightly and tensorflow wheels?
Which one should I install?
https://pypi.python.org/pypi/tensorflow vs
https://pypi.python.org/pypi/tf-nightly
I'm stuck on the nightly packages; I don't know what they are.

I searched softwareengineering.stackexchange.com and found this:
No, it means that every night, everything that has been checked into source control is built. That build is a "nightly build".
And on the installation page, TF says:
People who are a little more adventurous can also try our nightly binaries
So we can infer that tf-nightly is only for the adventurous: it may be built from untested or insufficiently tested source code, which can result in unexpected errors or failures.
If you use the "conservative" installation, pip3 install -U tensorflow, the binary is built from fully tested source code (tested by, or at least exposed to the eyes of, users like us), usually tagged with a 1.x release branch on GitHub.
I highly recommend building from source yourself: you can better tailor the build and get better performance. Just follow the official tutorial. You may need to download some of the required files from elsewhere if NVIDIA's site is under maintenance, as stated on the related pages.

By default, you should use tensorflow, not the nightly variants. That said, some problems that persist in the official tensorflow packages are already fixed in the nightly builds; see, for example, ValueError: Input 0 of layer dense is incompatible with the layer: its rank is undefined, but the layer requires a defined rank.
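As a quick way to tell the two apart (a minimal sketch; the version strings below are illustrative, not exact):
pip3 install -U tensorflow   # stable release
pip3 install -U tf-nightly   # nightly build; don't mix both in one environment
Then, in Python:
import tensorflow as tf
# stable wheels report plain versions such as 2.4.1, while nightly wheels
# carry a ".dev" date suffix such as 2.5.0.dev20210101
print(tf.__version__)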

Related

Install OpenCV from source or via Pip?

I've seen two ways of installing OpenCV (there may be more that I don't know about):
Installing from the source
Installing with pip: pip install opencv-python
My question is, why do we need to install OpenCV from source when we can simply install it using pip? Since people use both, both must be useful. If so, what are the conditions for choosing one over the other?
I will list out the differences between the two:
1.
Installation using pip
Installation is done at the default location where all the Python packages reside.
Installation from Source
The installation location is chosen by the developer.
2.
Installation using pip
In terms of performance, prebuilt packages may run slower because they are compiled with generic settings rather than optimized for your specific hardware.
Installation from Source
The developer can select optimization flags during compilation, which is what makes the library fast.
3.
Installation using pip
The developer can neither add nor remove features in an installation done by pip.
Installation from Source
The developer is free to add or remove features while building the library.
4.
Installation using pip
The package manager does the work on behalf of the developer and is also responsible for handling library updates.
Installation from Source
The developer is responsible for feature selection and for updating the library, and must keep track of new releases, the latest security patches, etc.
Hope this helps you!
OpenCV is always under development, and some parts of the library are not published in the prebuilt packages due to compatibility and patent/copyright issues; if you build from source, you can have all the capabilities you need. SURF and SIFT are examples of this problem.
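If you want to check what your prebuilt package actually includes, here is a minimal sketch (note: SIFT's patent expired in 2020, so recent opencv-python wheels do ship it, while SURF generally still needs a contrib or source build):
import cv2
print(cv2.__version__)
try:
    sift = cv2.SIFT_create()  # available in the main module since OpenCV 4.4.0
    print("SIFT is available in this build")
except AttributeError:
    # older builds expose it only via opencv-contrib-python, or not at all
    print("SIFT is not available in this build")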

Training Faster R-CNN or Mask R-CNN with Windows Anaconda: example prerequisites

I am not an experienced Python user. I have been working with R for years, but the keras implementation there doesn't provide any reproducible examples of working with object-detection architectures like Faster R-CNN. I found plenty of examples that use Python, but I ran into trouble just running the first lines of those examples: everything is built around downloading through pip (in a terminal on Ubuntu or another Linux OS), while analogues for Windows conda users are not provided.
That is, I don't even know how to install the mrcnn module from one of the examples on my Windows machine. Should I keep struggling? I have had a very bad experience trying to get compatible versions of CUDA, cuDNN and other things working for keras on Ubuntu. Now I am returning to Windows, but... keras in R still doesn't provide anything for object-detection techniques.
Does somebody have links to a Faster or Mask R-CNN implementation with conda examples for installing the prerequisites? My googling has failed here. Or in R-keras.
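For what it's worth, pip works inside a conda environment on Windows too, so one untested sketch for the prerequisites (the package names and the Python version here are assumptions, and whether pip install mrcnn matches the module used in your example needs verifying) would be:
conda create -n maskrcnn python=3.6
conda activate maskrcnn
conda install tensorflow keras
pip install mrcnn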

Tensorflow OMP: Error #15 when training

I am training my neural network using tensorflow on a CentOS HPC cluster. However, I got this error at the start of the training process:
OMP: Error #15: Initializing libiomp5.so, but found libiomp5.so already initialized.
OMP: Hint: This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.
The code is for instance segmentation, and it has worked fine for many people, but it failed in my case.
Why does it occur? How can I solve it?
I had a similar issue on macOS with the same error message (see this question) and found the following reasons:
Problem:
I had a conda environment where Numpy, SciPy and TensorFlow were installed.
Conda is using Intel(R) MKL Optimizations, see docs:
Anaconda has packaged MKL-powered binary versions of some of the most popular numerical/scientific Python libraries into MKL Optimizations for improved performance.
The Intel MKL functions (e.g. FFT, LAPACK, BLAS) are threaded with the OpenMP technology.
But on macOS you do not need MKL, because the Accelerate Framework comes with its own optimization algorithms and already uses OpenMP. That is the reason for the error message: OMP Error #15: ...
Workaround:
You should install all packages without MKL support:
conda install nomkl
and then use
conda install numpy scipy pandas tensorflow
followed by
conda remove mkl mkl-service
For more information see conda MKL Optimizations.
I solved this problem by asking an HPC server expert. This may be useful for Compute Canada system users.
Why does it occur?
This error is due to a conflict between a prebuilt TensorFlow Python wheel (specific to the Compute Canada system) and the conda environment.
Quote: "conda is always a bit problematic because it downloads precompiled binaries, mileage may vary..."
How to solve it?
As @abccd pointed out, "The best thing to do is to ensure that only a single OpenMP runtime is linked into the process". However, I haven't figured out how to ensure that.
So I uninstalled conda and installed everything in the module system using pip install. Then the network worked fine.
I solved it, as the message explains, by adding:
import os
# the error message's own unsafe, unsupported workaround: allow duplicate
# OpenMP runtimes; set this before importing tensorflow/numpy so it takes effect
os.environ['KMP_DUPLICATE_LIB_OK'] = 'True'
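Equivalently (a sketch assuming a bash-like shell; your_script.py is a placeholder), the variable can be set just for the launched process:
KMP_DUPLICATE_LIB_OK=TRUE python your_script.py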
Simply downgrading my version of TensorFlow using Anaconda did it for me.

Python - ensure I'm running the same package versions in both Windows and Linux

I have a Windows 10 machine that I'm using to develop my code (Anaconda 3.5). Now I need to get my code running on a Linux server, so that others can use it as part of an application. What is the best way of setting up and maintaining my Linux environment so that it replicates the Windows one in terms of packages and version numbers?
I'm training and saving DataFrames, SVMs (Sklearn) and ANNs (Keras) in my Windows environment, which is running Anaconda Python 3.5.
On the Linux server I need to be able to load and use these models, which requires having the same packages and package versions.
How do I keep the environments running the same package versions?
The plan is to release newer and better models as I get more data. These might run on newer versions of Keras, Sklearn etc. as versions are released. How can I ensure that in Python I can have the latest package versions but still be able to run older models (possibly trained and saved using older package versions) if required? Backwards compatibility is very important.
Background:
I'm creating a 'sizing algorithm' that uses a number of ANNs and SVMs. For others to use this algorithm, it's going to be running on a Linux server and somehow (the software guy assures me it can be done) integrated, or linked, into the company's software. The different models will be loaded, saved to memory, and used when called to size something. It is important that the older sizing algorithms can still be used even as I release newer, better versions.
Apparently I am the company's Python expert, even though I have only been using Python since January and have no experience in releasing algorithms for others to use. I would really appreciate your help with the best way of setting up the system.
Many thanks
On a machine with the correct packages:
pip freeze > requirements.txt
On machines that need the correct packages, having copied that file to it:
pip install -r requirements.txt
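Since both machines run Anaconda, the conda equivalent is (a sketch; note that conda's export includes platform-specific build strings that sometimes need stripping when moving between Windows and Linux):
conda env export > environment.yml
conda env create -f environment.yml
For the backwards-compatibility requirement, a common approach is to keep a pinned requirements file per released model version, so an old model can always be loaded in an environment rebuilt from its matching file.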

How do I run the Deep Dream source code?

(I downloaded the deep dream source code from https://github.com/google/deepdream)
First of all, I'm not interested purely in Deep Dream, but in machine learning, and deep learning in particular, as a whole. I know programming (though I'm by no means an expert) and Python syntax, etc. However, I'm not familiar with external libraries and how to properly install them.
Thus, I'm struggling with simply getting the source code for Deep Dream to run. Here's what I've done so far:
Installed Python, but it couldn't run the .ipynb file (nor did it include any of the libraries), so I:
Installed Anaconda, but it didn't include Caffe, so I:
Downloaded Caffe, but it requires cuDNN(??), so I:
Downloaded cuDNN (does it require CUDA (whatever that is)?)
What are the next steps? There are so many things to download and install and I have no experience with any of it except for Python programming itself.
I tried reading the install instructions, but they left me even more confused.
What are the steps I should take next in order to get it running?
Keep in mind that I'm a beginner. No hate please. Official documentation and terminology are still hard to understand. I'm simply looking for step-by-step instructions.
Thanks in advance!
Edit: I'm using Windows
[Promoted from a comment]
If you're not familiar with it, Docker is going to be your easiest option. Think of a Docker container as a portable, fully self-contained VM.
You can install docker on almost any OS, then use it to load a container which has all the software pre-installed.
You can get docker here and you can get the CPU / GPU container by following the instructions here.
Note that Docker is really handy for other things too - e.g. I have containers for CentOS 6/6.5/7, RHEL, SLES, Windows, etc. for testing and as servers.
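As a minimal sketch of that workflow (the image name below is a hypothetical placeholder; use the one given in the instructions linked above):
docker pull some-user/deepdream                  # hypothetical image name
docker run -it -p 8888:8888 some-user/deepdream
# then open http://localhost:8888 in a browser to run the .ipynb notebook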
