I am trying to run Theano on Ubuntu, which requires libatlas.
I have already installed libatlas, and I can find it in /usr/lib/atlas-base.
I have also copied all of its files to a new folder, /usr/lib/atlas:
cp -a /usr/lib/atlas-base/* /usr/lib/atlas
But still, when I run the python code I see:
/usr/bin/ld: cannot find -latlas
/usr/bin/ld: cannot find -lf77blas
/usr/bin/ld: cannot find -lcblas
I also tried adding it to the environment variables, but that didn't work:
set LIBPATH = [BUILD_LIB_DIR, /usr/lib/atlas]
I also tried adding the path to the ld configuration file:
/usr/lib/atlas
or
/usr/lib/atlas-base
None of them worked and I still see the error running the Python code.
To change how Theano links to BLAS, you need to use Theano flags [1]. They can be set with the THEANO_FLAGS environment variable or with a configuration file.
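For illustration, a minimal sketch of both mechanisms; the -L path and library names below are assumptions based on the errors above (adjust them to your install), and your_script.py is just a placeholder:

THEANO_FLAGS='blas.ldflags=-L/usr/lib/atlas-base -lf77blas -lcblas -latlas' python your_script.py

or, equivalently, in ~/.theanorc:

[blas]
ldflags = -L/usr/lib/atlas-base -lf77blas -lcblas -latlas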
How did you tell Theano to use ATLAS? If you just installed the atlas packages, it won't work. You need to install the libatlas-dev package, as per the Theano installation instructions for Ubuntu [2].
One last point: we don't recommend ATLAS, especially on Ubuntu. OpenBLAS is packaged for Ubuntu and is faster; see [2] for details on how to install it. You will need to remove ATLAS before installing OpenBLAS, otherwise there will be a conflict (a sketch of the commands follows the links below).
[1]http://www.deeplearning.net/software/theano/library/config.html#envvar-THEANO_FLAGS
[2]http://www.deeplearning.net/software/theano/install_ubuntu.html#install-ubuntu
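As a rough sketch of the OpenBLAS route (the package names are assumptions for a recent Ubuntu release; see [2] for the authoritative steps):

sudo apt-get remove libatlas-base-dev libatlas3-base   # remove ATLAS first to avoid the conflict
sudo apt-get install libopenblas-dev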
I'm trying to install Python on RHEL 7, which requires building Python from source. When I try to do that, I run into this error
/usr/bin/ld: skipping incompatible /usr/lib/gcc/x86_64-redhat-linux/4.8.5/../../../libsqlite3.so when searching for -lsqlite3
/usr/bin/ld: skipping incompatible //lib/libsqlite3.so when searching for -lsqlite3
/usr/bin/ld: skipping incompatible //usr/lib/libsqlite3.so when searching for -lsqlite3
/usr/bin/ld: cannot find -lsqlite3
collect2: error: ld returned 1 exit status
warning: building with the bundled copy of libffi is deprecated on this platform. It will not be distributed with Python 3.7
Python build finished successfully!
The necessary bits to build these optional modules were not found:
_bz2 _curses _curses_panel
_lzma _tkinter readline
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
The following modules found by detect_modules() in setup.py, have been
built by the Makefile instead, as configured by the Setup files:
atexit pwd time
Failed to build these modules:
_sqlite3
running build_scripts
when I run make.
When I look in those paths, this is what I find (I also have sqlite3 installed):
[brad@reason Downloads]$ ls /usr/lib/gcc | grep sql
[brad@reason Downloads]$ ls /lib | grep sql
libodbcpsqlS.so
libodbcpsqlS.so.2
libodbcpsqlS.so.2.0.0
libsqlite3.so
libsqlite3.so.0
libsqlite3.so.0.8.6
[brad@reason Downloads]$ ls /usr/lib | grep sql
libodbcpsqlS.so
libodbcpsqlS.so.2
libodbcpsqlS.so.2.0.0
libsqlite3.so
libsqlite3.so.0
libsqlite3.so.0.8.6
[brad@reason Downloads]$ sqlite3
SQLite version 3.7.17 2013-05-20 00:56:22
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite>
I'm not sure what I'm doing wrong. Clearly the linker cannot find a usable libsqlite3 library, but I'm not sure where to get one. I've read that I need to install sqlite-devel on RHEL, but when I try to do that it seems the required repo is missing; I think my employer altered the repo list. How can I check whether it is installed? I've tried to install it from RPMs, but I think that failed (a long list of dependencies was required).
[root@reason Downloads]# yum install -y sqlite sqlite-devel
Loaded plugins: downloadkvmonly-background, ibm-check-lotus-updates, ibm-check-upgrade, ibm-check-xorg-updates, ibm-repository, langpacks, post-transaction-actions, refresh-packagekit, versionlock
Cannot reach IBM Intranet network. Please ensure you have an active IBM connection.
http://people.centos.org/tru/devtools-2/7Workstation/x86_64/RPMS/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
Trying other mirror.
To address this issue please refer to the below knowledge base article
https://access.redhat.com/articles/1320623
If above article doesn't help to resolve this issue please open a ticket with Red Hat Support.
One of the configured repositories failed (testing 2 devtools for CentOS 7Workstation),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=testing-devtools-2-centos-7Workstation ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable testing-devtools-2-centos-7Workstation
or
subscription-manager repos --disable=testing-devtools-2-centos-7Workstation
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=testing-devtools-2-centos-7Workstation.skip_if_unavailable=true
failure: repodata/repomd.xml from testing-devtools-2-centos-7Workstation: [Errno 256] No more mirrors to try.
http://people.centos.org/tru/devtools-2/7Workstation/x86_64/RPMS/repodata/repomd.xml: [Errno 14] HTTPS Error 404 - Not Found
I expect to be able to build Python and for the _sqlite3 module to build. Currently _sqlite3 fails to build, which means import sqlite3 does not work in Python, even though it should. I've considered just installing Python in a Docker container, but I don't think that will quite do what I need.
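As a side note on the "how can I check whether it is installed" part, a quick sketch with rpm:

rpm -q sqlite sqlite-devel     # reports each package's installed version, or says it is not installed
rpm -qa | grep -i sqlite       # lists every sqlite-related package on the system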
@some-programmer-dude was correct: my sqlite3 install was a 32-bit version. I just downloaded the source and built and installed it, and now the _sqlite3 module is no longer missing. I made the mistake of using the precompiled Linux binaries from the download page, which are only built for 32-bit. I should have built it from source from the start. Thanks, some-programmer-dude.
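For anyone following the same route, a rough sketch of the from-source build described above (the tarball name is a placeholder; grab a current autoconf tarball from sqlite.org):

tar xzf sqlite-autoconf-<version>.tar.gz
cd sqlite-autoconf-<version>
./configure --prefix=/usr/local
make
sudo make install
# then re-run Python's ./configure and make so the _sqlite3 module links against the new 64-bit library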
To install dlib, I followed this tutorial: http://www.pyimagesearch.com/2017/03/27/how-to-install-dlib/.
I am on Mac OS X 10.12.5 and using Python 3.5.
I run
$ brew install cmake
$ brew install boost
$ brew install boost-python --with-python3
These run without any error.
But when I try to install dlib with pip install dlib, I get an error:
The C compiler
"/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc"
is not able to compile a simple test program.
error: cmake configuration failed
ld: can't map file, errno=22 file '/usr/local/opt/qt/lib' for architecture x86_64
For the full error, please see this link (the full error is too long to paste here):
https://gist.github.com/alexattia/3e98685310d90b65031db640d3ea716a
After retracing the error, when I tried to build dlib manually, I get this:
Linking C executable cmTC_05e45
/usr/local/Cellar/cmake/3.8.2/bin/cmake -E cmake_link_script
CMakeFiles/cmTC_05e45.dir/link.txt --verbose=1
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc
-Wl,-search_paths_first -Wl,-headerpad_max_install_names
/usr/local/opt/qt/lib CMakeFiles/cmTC_05e45.dir/testCCompiler.c.o -o
cmTC_05e45
For the full trace, see: https://gist.github.com/alexattia/1e54ffb87c9eb4c811033f5cadd90331
I reinstalled Xcode (from the App Store) and CMake (3.8.2 from the download page), and I even installed Qt Creator to have a clean version of Qt, but I still have the same error.
I also tried to install it with conda, but after the installation I still don't have the module in Python.
Thank you very much for any help.
You commented:
Indeed, in my .bash_profile, I have export LDFLAGS="/usr/local/opt/qt/lib",
export CPPFLAGS="/usr/local/opt/qt/include", export PATH="/usr/local/opt/qt/bin:$PATH".
But even after commenting these out, I still have the same error.
Neither of your assignments to LDFLAGS or CPPFLAGS makes sense, and the
first one is the cause of the linker failure that concerns you.
The value of the environment variable LDFLAGS, if set, is interpreted by your build system
as linkage options. Likewise, the value of the environment variable
CPPFLAGS, if set, is interpreted as preprocessor options.
/usr/local/opt/qt/lib is not a linkage option and /usr/local/opt/qt/include
is not a preprocessor option. These are simply directory names. Any argument that
you pass to the linker (or preprocessor, or compiler) that is not an option is
interpreted by the tool as an input file. Thus you have led the linker to believe
that /usr/local/opt/qt/lib is an input file to your linkage.
ld: can't map file, errno=22 file '/usr/local/opt/qt/lib' for architecture x86_64
is what the linker says when it discovers that /usr/local/opt/qt/lib is not
a file at all.
Presumably, you wish to instruct the linker that /usr/local/opt/qt/lib is
a directory in which it should search for libraries required by your linkage.
The linkage option that expresses that intent is:
-L/usr/local/opt/qt/lib
Here are the GCC options for linking
Similarly, you intend to instruct the preprocessor that /usr/local/opt/qt/include
is a directory in which it should search for header files. The preprocessor
option to express that is:
-I/usr/local/opt/qt/include
Here are the GCC options for preprocessing
It is abnormal and inadvisable to specify compilation or linkage options
in your bash login profile, as you are doing. Specify such options in the
build system's input files (makefile, CMakeLists.txt, or similar), or as arguments to
the build system's configuration. But if you insist on specifying them in
your bash login profile, then you should specify:
LDFLAGS=-L/usr/local/opt/qt/lib
CPPFLAGS=-I/usr/local/opt/qt/include
And once you have made these environment settings in your .bash_profile, they will
only take effect in new login shells.
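To make the distinction concrete, here is a hypothetical manual build that consumes those variables the way a typical makefile would (example.c and -lsomeqtlib are placeholders, not real names):

cc $CPPFLAGS -c example.c -o example.o        # -I/usr/local/opt/qt/include lets the preprocessor find headers
cc example.o $LDFLAGS -lsomeqtlib -o example  # -L/usr/local/opt/qt/lib tells the linker where to search for -l libraries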
I had a similar issue but found out it was due to boost.
Try this.
brew uninstall boost-python
brew uninstall boost
brew install boost-python --with-python3 --without-python
pip3 install dlib
Fresh install of Anaconda Python 3 onto the secondary hard drive of a Mac running Mavericks.
import sklearn
gives
Library not loaded: /usr/local/lib/libgcc_s.1.dylib
Referenced from: /Volumes/SecondHD/anaconda/lib/python3.5/site-packages/scipy/sparse/linalg/isolve/_iterative.so
Reason: image not found
gcc was installed with Homebrew and exists.
which gcc
gives
/usr/bin/gcc
In /usr/local/Cellar/gcc/6.1.0/lib/gcc/6 I can find libgcc_s.1.dylib, so I know it's there even though it was not symlinked into /usr/local/lib.
Rather than adding more symlinks to /usr/local/lib from all the libraries in Cellar, I instead added the location of the libraries to the search path.
In my ~/.profile I have
export LIBRARY_PATH="$LIBRARY_PATH:/usr/local/lib"
export LIBRARY_PATH="$LIBRARY_PATH:/usr/local/Cellar/gcc/6.1.0/lib/gcc/6"
But that does not work. However, the error goes away if I add this line to my .profile
export DYLD_FALLBACK_LIBRARY_PATH=/usr/local/Cellar/gcc/6.1.0/lib/gcc/6
My understanding from this post is that LIBRARY_PATH is a list of places a compiler (like gcc) will look for libraries when it is linking code. But in Mac OSX, DYLD_LIBRARY_PATH and DYLD_FALLBACK_LIBRARY_PATH contain a list of places any program will search for a shared library when it runs.
So if sklearn wants a gcc library, that would mean some compilation (and linking) will happen. Why is this line not sufficient
export LIBRARY_PATH="$LIBRARY_PATH:/usr/local/Cellar/gcc/6.1.0/lib/gcc/6"
and why is DYLD_FALLBACK_LIBRARY_PATH or DYLD_LIBRARY_PATH needed?
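One way to see that this is a run-time (dyld) lookup rather than a link-time one is to inspect the install names recorded in the failing extension (the path below is taken from the error message above):

otool -L /Volumes/SecondHD/anaconda/lib/python3.5/site-packages/scipy/sparse/linalg/isolve/_iterative.so

If the output lists /usr/local/lib/libgcc_s.1.dylib, then dyld has to find that exact file when the module is imported, which is why DYLD_FALLBACK_LIBRARY_PATH (a run-time setting) helps while LIBRARY_PATH (consulted only when compiling and linking) does not.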
I had the same problem. What I did was create symbolic links from my gcc Cellar folder to /usr/local/lib.
Look for the right path of your gcc:
ln -s /usr/local/Cellar/gcc/X.X.X/lib/gcc/6/* /usr/local/lib
I am building the YouCompleteMe plugin for Vim, following this document. When I run make I get the following error.
Linking CXX shared library /home/sagar/.vim/bundle/YouCompleteMe/python/ycm_core.so
/usr/bin/ld: /usr/local/lib/libpython2.7.a(abstract.o): relocation R_X86_64_32S against `_Py_NotImplementedStruct' can not be used when making a shared object; recompile with -fPIC
/usr/local/lib/libpython2.7.a: could not read symbols: Bad value
collect2: error: ld returned 1 exit status
What is this error?
I have installed pyenv to manage Python versions. Is it causing the problem?
Make the linker point to the .so (shared object) file and not the .a (static lib) file.
You can do this by specifying the flag when running cmake:
cmake -G "Unix Makefiles" -DPYTHON_LIBRARY=/usr/local/lib/libpython2.7.so . ~/.vim/bundle/YouCompleteMe/cpp
Do mind that even though you're using pyenv, the YouCompleteMe build may point to an undesired
Python build, as versions are not correctly auto-detected right now.
If you're having this problem, you should probably also specify the Python header files correctly:
cmake -G "Unix Makefiles" -DPYTHON_LIBRARY=/usr/local/lib/libpython2.7.so -DPYTHON_INCLUDE_DIR=/usr/local/include/python . ~/.vim/bundle/YouCompleteMe/cpp
PS: I'm assuming your headers are in that path; do check before.
Since some paths were different on my system from those in the accepted answer (both the CMake and the Python lib ones), I'm posting an alternative solution to the above problem:
Make sure to have a shared library version of libpython2.7.so
$ locate libpython
/usr/lib/x86_64-linux-gnu/libpython2.7.so.1
Either create a symlink to it from where CMake expects it to be
sudo ln -s "/usr/lib/x86_64-linux-gnu/libpython2.7.so.1" "/usr/lib/libpython2.7.so"
or alternatively, as written in YCM's build script code, you could add additional CMake options to ensure the .so library is properly found
export EXTRA_CMAKE_ARGS="-DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython2.7.so.1"
I'm trying to install Theano on Enthought Python Distribution (EPD), but I am getting a weird error. Here is what my installation looks like:
I have installed EPD to C:\Python27.
After that, I have installed pip by using easy_install pip
I installed Theano by using pip install Theano
To test, I start ipython and type import theano. I get the following error:
Problem occurred during compilation with the command line below:
g++ -shared -g -IC:\Python27\lib\site-packages\numpy\core\include -IC:\Python27\include -o C:\Users\Ove\AppData\Local\Theano\compiledir_Windows-7-6.1.7601-SP1-Intel64_Family_6_Model_37_Stepping_5_GenuineIntel-2.7.2\lazylinker_ext\lazylinker_ext.pyd C:\Users\Ove\AppData\Local\Theano\compiledir_Windows-7-6.1.7601-SP1-Intel64_Family_6_Model_37_Stepping_5_GenuineIntel-2.7.2\lazylinker_ext\mod.cpp -LC:\Python27\libs -LC:\Python27 -lpython27
C:\Users\Ove\AppData\Local\Temp\ccIoNPlU.o: In function `initlazylinker_ext':C:/Users/Ove/AppData/Local/Theano/compiledir_Windows-7-6.1.7601-SP1-Intel64_Family_6_Model_37_Stepping_5_GenuineIntel-2.7.2/lazylinker_ext/mod.cpp:911: undefined reference to `__imp_Py_InitModule4'
collect2: ld returned 1 exit status
Exception: Compilation failed (return status=1): C:\Users\Ove\AppData\Local\Temp. C:/Users/Ove/AppData/Local/Theano/compiledir_Windows-7-6.1.7601-SP1-Intel64_Family_6_Model_37_Stepping_5_GenuineIntel-2.7.2/lazylinker_ext/mod.cpp:911: undefi. collect2: ld returned 1 exit status4'
Does anyone know how to get Theano to run with EPD?
The last release of Theano (0.5) has some problems on Windows. You need to install the bleeding-edge version. You can update your version like this:
pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git
This should solve the problem. If not, you probably have a conflict with a different installation of gcc. Did you install one with Cygwin or MinGW? EPD installs its own version of MinGW.
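One quick way to check for such a conflict from a Windows command prompt: where g++ lists every g++ found on the PATH in search order, and g++ --version shows which toolchain is actually picked up.

where g++
g++ --version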
I couldn't get Theano working with Enthought, but using the Anaconda python distribution I eventually got it working. Here's how:
uninstall Enthought and any other python version (start from scratch)
download and install Anaconda python distribution from this link: http://09c8d0b2229f813c1b93-c95ac804525aac4b6dba79b00b39d1d3.r79.cf1.rackcdn.com/Anaconda-1.5.0-Windows-x86_64.exe and click the option to use Anaconda as your default python version
to get the academic license, go to this page: https://store.continuum.io/cshop/academicanaconda and click the "free" button next to Anaconda Academic License (right side of page)
you should receive an email with an academic license .txt file. Follow the instructions in the email to place the file in the correct directory, and run several command-line commands to update anaconda and install numpy and scipy
open a windows command prompt and type
pip install theano
create a file .theanorc.txt containing the lines:
[global]
openmp=False
[blas]
ldflags=
place .theanorc.txt in your home folder (the folder for your user account)
make sure the following paths are added to your PATH environment variable:
C:\Anaconda\MinGW\bin;
C:\Anaconda\MinGW\x86_64-w64-mingw32\lib;
C:\Anaconda;
C:\Anaconda\Scripts;
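After editing PATH (for example through the Environment Variables dialog), a quick sanity check from a new command prompt (a sketch; adjust C:\Anaconda if you installed Anaconda elsewhere):

echo %PATH%
where g++
where python

If C:\Anaconda\MinGW\bin and C:\Anaconda show up first for g++ and python respectively, Theano should pick up the right toolchain.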