theano g++ not detected - python

I installed Theano, but when I try to use it I get this error:
WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute
optimized C-implementations (for both CPU and GPU) and will default to Python
implementations. Performance will be severely degraded.
I installed g++ and added the correct path to the environment variables, but it seems like Theano still does not detect it.
Does anyone know how to solve this, or what the cause might be?

I had this occur on OS X after I updated Xcode (through the App Store). Everything worked before the update, but afterwards I had to start Xcode and accept the license agreement. Then everything worked again.

On Windows, you need to install MinGW to provide g++. It is usually advisable to use the Anaconda distribution to install Python. Theano works with Python 3.4 or older versions. You can use the conda install command to install MinGW.

I solved this problem just now on Windows 10 with Anaconda3.
First run
conda install mingw
in the command line.
If you come across this problem:
CondaIOError: IO error: Missing write permissions in: C:\ProgramData\Anaconda3
change the permissions on the Security tab of the folder where you installed Anaconda; make sure your user has write permissions to that folder.

This is the error I experienced on my Mac running a Jupyter notebook with a Python 3.5 kernel. Hope this helps someone; I am sure rggir is well sorted at this stage :)
Error
Using Theano backend.
WARNING (theano.configdefaults): g++ not detected! Theano will be unable to execute optimized C-implementations (for both CPU and GPU) and will default to Python implementations. Performance will be severely degraded. To remove this warning, set Theano flags cxx to an empty string.
Cause
Updating Xcode (which provides the g++ compiler) without accepting the terms and conditions; this was pointed out above, thanks Emiel.
Resolution:
Type g++ --version in the Mac terminal.
"Agreeing to the Xcode/iOS license requires admin privileges, please re-run as root via sudo." is output as an error.
Launch Xcode and accept the terms and conditions.
Run g++ --version in the terminal again.
Something similar to the following will be returned, showing that Xcode has been fully installed and g++ is now available to Keras:
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
Apple LLVM version 8.0.0 (clang-800.0.42.1)
Target: x86_64-apple-darwin15.6.0
Thread model: posix
InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
Restart your machine… I am sure there are some more complicated steps that someone smarter than me can add here to make this faster.
Run the model.fit function of the Keras application, which should run faster now … win!
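To double-check from Python that Theano now picks up the compiler, here is a minimal sketch (theano.config.cxx holds the C++ compiler command Theano will use and is an empty string when none was detected):
import theano

# Prints the g++ command Theano will use; an empty string means no compiler was found
print(theano.config.cxx)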

Run the following command on CentOS:
yum install gcc-c++
Then it will work.

I had this issue on a Mac as well. I also couldn't install Xcode through the App Store, so instead I installed the command line tools from the terminal using:
xcode-select --install

Related

After installing and uninstalling a package, python3 crashes with error "Segmentation fault (core dumped)"

The steps leading to the problem are as follows:
I installed a package using pip. The package is here and its install doc is here. The command I used is
pip install --upgrade tensorflow-graphics-gpu
Because I don't have superuser privileges, the package was installed in my user space; pip prompted me about that.
I uninstalled the package using
pip uninstall tensorflow-graphics-gpu
I started python3 and type
import tensorflow as tf
This statement worked fine before. But this time, python quits with an error:
Segmentation fault (core dumped)
The environment is as follows:
A remote Linux machine, kernel version 5.8.0. I am not a super user.
Python 3.8.6
CUDA 11.1
CPU: Core i9-10900K
nVidia RTX GPU
The same error crashes Python if I try to import PyTorch. The sysadmin is very disagreeable, so I can get no help from him, let alone upgrading drivers or reinstalling Python. I tried to clear the caches in my user space that I know of, but had no luck. I searched the internet for a solution, to no avail.
Can someone please tell me how to fix this issue? Thanks a lot.
I tried to clear the caches in my user space that I know of, but had no luck.
It seems pretty clear that something in your $HOME directory is still being used, and is causing the system python to crash.
To discover what that something is, you can look at which files are being opened using this command:
strace -e file python -c 'import tensorflow'
Once you know which files are being opened, remove/reinstall corresponding packages, and you should be back in business.
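If you prefer to look from inside Python first, here is a small sketch that lists what is still sitting in the user-level site-packages directory (the place pip install --user writes to); anything tensorflow-related left over there is a likely candidate for what strace will show being loaded:
import os
import site

# Directory used by `pip install --user`; check whether any tensorflow/tensorboard
# leftovers are still present after the uninstall
user_site = site.getusersitepackages()
print("User site-packages:", user_site)
if os.path.isdir(user_site):
    for name in sorted(os.listdir(user_site)):
        print(" ", name)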

Tensorflow GPU / CUDA installation on Ubuntu

I have set up Ubuntu 18.04 and tried to make TensorFlow 2.2 work with the GPU (I have an Nvidia/CUDA graphics card) with Python.
Even after reading the documentation https://www.tensorflow.org/install/gpu#linux_setup, it failed (see below for details about how it failed).
Question: would you have a canonical "todo" list (starting point: freshly installed Ubuntu server) on how to install tensorflow-gpu and make it work, with a few steps?
Notes:
I have read many similar forum posts, and I think that having a canonical "todo" (from a fresh Ubuntu install to having tensorflow-gpu working) would be interesting, with a few steps/bash commands
the documentation I used involved
export LD_LIBRARY_PATH...
# Add NVIDIA package repository
sudo apt-key adv --fetch-keys http://developer.download...
...
# Install CUDA and tools. Include optional NCCL 2.x
sudo apt install cuda9.0 cuda...
Even after a lot of trial and error (I won't copy/paste all the different errors here, it would be too long), at the end:
import tensorflow
always failed. Some reasons included `ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory`. I have already read the relevant question here, and this very long (!) GitHub issue.
After some trial and error, import tensorflow works, but it doesn't use the GPU (see also Tensorflow not running on GPU).
Well, I was facing the same problem. The first thing to do is to look up which TensorFlow version is required. In your case, TensorFlow 2.2 requires CUDA 10.1. The correct cuDNN version is also important; in your case it would be cuDNN 7.4. An additional point is the installed Python version: I would recommend Python 3.5-3.8. If any of those mismatch, full compatibility is almost impossible.
So if you want a check list, here you go:
Install CUDA 10.1 by installing nvidia-cuda-toolkit.
Install the cuDNN version compatible with CUDA 10.1.
Export CUDA environment variables.
If Bazel is not installed, you will be asked about that.
Install TensorFlow 2.2 using pip. I would highly recommend the usage of a virtual environment.
You can find the compatibility checklist of TensorFlow and CUDA here.
You can find the CUDA Toolkit here.
Finally, get cuDNN in the correct version here.
That's all.
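Once those steps are done, a quick sanity check from inside the virtual environment (a minimal sketch, assuming TensorFlow 2.2 installed via pip):
import tensorflow as tf

# True only if this TensorFlow build was compiled with CUDA support
print("Built with CUDA:", tf.test.is_built_with_cuda())
# Should list at least one 'GPU' device if the driver, CUDA and cuDNN versions match
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))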
I faced the problem as well when using the Google Cloud Platform for two projects involving deep learning. They provide servers with nothing but a freshly installed Ubuntu OS. Based on my experience, I recommend the following steps:
Look up the CUDA and cuDNN versions supported by the current TensorFlow release on the TensorFlow page.
Install the targeted CUDA version from the deb package retrieved from Nvidia's CUDA page, and be careful: more recent CUDA versions might not work! This will automatically install the corresponding Nvidia drivers.
Install the targeted cuDNN version from this page, and again be careful: a more recent cuDNN version might not work.
Install tensorflow-gpu using pip.
This should work. Your problem is probably that you are using a more recent CUDA version than the one targeted by the current TensorFlow release.
To install tensorflow-gpu, the guidelines provided on the official website are very tedious for beginners; instead, we can follow these simple steps:
Note: the NVIDIA driver must be installed before this (you can verify it using the command nvidia-smi).
Install Anaconda https://www.anaconda.com/distribution/?
Create a virtual environment using the command "conda create -n envname"
Then activate the env using the command "conda activate envname"
Finally install TensorFlow using the command "conda install tensorflow-gpu"
You can then verify GPU usage with the given code:
import tensorflow as tf

if tf.test.gpu_device_name():
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
    print("not using gpu")
You can find the tutorial at the link given below:
https://www.pugetsystems.com/labs/hpc/Install-TensorFlow-with-GPU-Support-the-Easy-Way-on-Ubuntu-18-04-without-installing-CUDA-1170/?
I would suggest first checking the availability of the GPU using the nvidia-smi command.
I had faced the same issue; I was able to resolve it by using a Docker container. You can install Docker using Install Docker Engine on Ubuntu, or use the Digital Ocean guide (I used this one): How To Install and Use Docker on Ubuntu 18.04
After that it is simple: just run one of the following commands based on the requirements
NV_GPU='0' nvidia-docker run --runtime=nvidia -it -v /path/to/folder:/path/to/folder/for/docker/container nvcr.io/nvidia/tensorflow:17.11
NV_GPU='0' nvidia-docker run --runtime=nvidia -it -v /storage/research/:/storage/research/ nvcr.io/nvidia/tensorflow:20.12-tf2-py3
Here '0' represents the GPU number; if you want to use more than one GPU, just use '0,1,2' and so on.
Hope this solves the issue.

How to install python cntk in CentOS linux without anaconda

I have a CentOS 7.4 GPU machine.
I tried to install CNTK on the machine. I cannot use the pip wheels, as they are for Ubuntu (I tried to install with them and got a segfault when I import cntk).
I compiled the cloned CNTK successfully. However, when I try to get the Python version working, I run into trouble. I did
sudo python setup.py install
in $cntk_root/bindings/python
and get
building '_cntk_py' extension
swigging cntk/cntk_py.i to cntk/cntk_py_wrap.cpp
swig -python -c++ -D_MSC_VER -I../../Source/CNTKv2LibraryDll/API -I../../bindings/common -Werror -threads -o cntk/cntk_py_wrap.cpp cntk/cntk_py.i
cntk/cntk_py.i:92: Error: Syntax error in input(1).
error: command 'swig' failed with exit status 1
It must be simpler than this. Suggestions?
I can't help with your above problem, but if you relax the constraint of not using Conda and follow these instructions, I can say that it will work, since I did it a few weeks ago on RHEL 7 using the CNTK 2.2 release.
Note, release 2.2 assumes that /var/lock is writable, which is not true for me. If you follow the instructions above, you need to open CrossProcessMutex.h and replace /var/lock/ with a writable directory.
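Once the install goes through, a quick sanity check from Python (a minimal sketch, assuming the CNTK 2.2 Python bindings are on the path):
import cntk

# Print the installed version and the devices CNTK can see
# (CPU only, plus a GPU entry if the GPU build and drivers are working)
print(cntk.__version__)
print(cntk.device.all_devices())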

pystan: CompileError: command 'gcc' failed with exit status 1 (Windows)

Before I get too far into this, I should note that I have seen a very similar question, but the solution presented did not work for me. Perhaps one reason is that that was a Linux build, and my current difficulty is on a Windows 7 machine. I use Cygwin to get access to the gcc (5.2.0) compiler suite.
In any event, I have been attempting to try out Stan via PyStan. I am working with an Anaconda (2.4.1, 64-bit) distribution which I just updated today (Python 2.7.11). I initially tried to install PyStan via pip, but the install kept failing with what looks like the following error:
Cannot build msvcr library: "msvcr90d.dll" not found
Consequently, I used conda instead, which seemed to install just fine. (I should note that the conda install pushed my numpy back to an earlier version, which created conflicts with pandas upon import. I just updated Anaconda to deal with these broken dependencies.) I was also able to import PyStan without any problems. However, when I actually tried to fit a model (inside a Jupyter notebook), the process failed with the exception in the title.
The first thing I did was confirm that gcc was in the referenced location (not shown in the title). Indeed it was, and it seemed to be working just fine. I then tried to run the model as a script from the command line (still using Python), and it failed with the same error. When I recreated the model via the REPL, it pointed to a different location that had a .bat file referencing the (verified) compiler, and that failed as well.
I am pretty sure this is because I have Visual Studio 2012, instead of Visual Studio 2008. While it is possible for me to run parallel installations, if this code is going to be useful for others in the future, these are not reasonable hoops to jump through to make it happen. I was hoping that someone else might have a better explanation. Any info would be greatly appreciated.
I benefited from the post at https://github.com/stan-dev/pystan/issues/306
I have met various error messages, but finally I installed PyStan successfully.
My machine is also on Windows 7, x64, with Anaconda3 installed. Here are the procedures to install PyStan from source.
Install Visual Studio 2017 & Visual Studio C++ Build Tool 2015 at http://landinghub.visualstudio.com/visual-cpp-build-tools
Update Conda
conda update conda
conda update --all
Check the dependencies
pip install setuptools
conda install numpy cython matplotlib scipy pandas
Install gcc compiler components
conda install libpython
conda install -c msys2 m2w64-toolchain=5.3.0
Create a distutils.cfg file inside the Anaconda3\Lib\distutils folder with the following content:
[build]
compiler = mingw32
Download Git at https://git-scm.com/downloads
git clone --recursive https://github.com/stan-dev/pystan.git
Compile from the source code
python setup.py build --compiler=mingw32
python setup.py install
P.S. The solution for the issue: Cannot build msvcr library: "vcruntime140d.dll" not found.
Copy vcruntime140d.dll from C:\Windows\System32 to any folder that is reachable via the PATH set in the advanced system settings / environment variables / system variables.
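After the build succeeds, a small smoke test that the whole toolchain works, along the lines of the PyStan 2.x quick start (compiling the model is what exercises the mingw32 compiler configured above):
import pystan

# Trivial model: a single standard-normal parameter
model_code = 'parameters {real y;} model {y ~ normal(0,1);}'
model = pystan.StanModel(model_code=model_code)  # this step invokes the C++ compiler
fit = model.sampling(n_jobs=1)
print(fit.extract()['y'].mean())  # should be close to 0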

Switching gcc version on mac

I have the newest Xcode (4D199) installed, and in the terminal when I type
new-host-2: me$ gcc -version
i686-apple-darwin11-llvm-gcc-4.2: no input files
Is that the default Xcode/Mac gcc compiler version? Because when I try to do a
sudo easy_install cython
I get:
Running Cython-0.15.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-qS3Kqb/Cython-0.15.1/egg-dist-tmp-Zh0Vnv
cc1: error: unrecognized command line option "-arch"
cc1: error: unrecognized command line option "-arch"
I've read that -arch is an Apple GCC-only option. When I installed a port from MacPorts, I remember it installing something called "llvm", and now I suspect that is being used instead of the one that comes with Xcode.
Any way to switch it back?
Oh, and when I type "sudo port select gcc" I get (this might be relevant for knowing which gcc version I have):
Available versions for gcc:
apple-gcc42
gcc42
llvm-gcc42
mp-gcc44
mp-llvm-gcc42
none
It does sound like you're getting a non-Apple version. If you don't need any non-standard compilers, I'd remove any that MacPorts has installed. The Apple infrastructure is different enough that using compilers from MacPorts causes grief fairly easily.
This is not extremely related to your problem but you will find a solution here: Can't install Ruby under Lion with RVM – GCC issues
This answer was edited multiple times and now contains three alternative solutions. Skip to the end and try the simple “edit 3” solution first, it seems to work for most people.
You need a non-LLVM version of GCC, which is no longer included with Xcode 4.2. Install it yourself (or downgrade to Xcode 4.1 temporarily), then do CC=/usr/local/bin/gcc-4.2 rvm install 1.9.3 (substituting the path to your non-LLVM gcc).
Edit: https://github.com/kennethreitz/osx-gcc-installer/downloads may help for installing GCC.
Edit 2 (apparently the easiest solution): Alternatively you can try to add --with-gcc=clang to the arguments to configure for Ruby to use clang.
Edit 3: rvm install 1.9.3 --with-gcc=clang does that for you.
