I have an account on a remote computer without root permissions, and I need to install local versions of Python, NumPy and SciPy there (the remote computer has a version of Python that is incompatible with some code I have). I've been trying to install NumPy locally since yesterday, with no success.
I successfully installed a local version of Python (2.7.3) in /home/myusername/.local/, so I can access this version of Python via /home/myusername/.local/bin/python. I tried two ways of installing NumPy:
I downloaded the latest stable version of NumPy from the official website, unpacked it, went into the unpacked folder and ran: /home/myusername/.local/bin/python setup.py install --prefix=/home/myusername/.local. However, I get the following error, which is followed by a series of other errors deriving from this one:
gcc -pthread -shared build/temp.linux-x86_64-2.7/numpy/core/blasdot/_dotblas.o
-L/usr/local/lib -Lbuild/temp.linux-x86_64-2.7 -lptf77blas -lptcblas -latlas
-o build/lib.linux-x86_64-2.7/numpy/core/_dotblas.so
/usr/bin/ld: /usr/local/lib/libptcblas.a(cblas_dptgemm.o): relocation
R_X86_64_32 against `a local symbol' can not be used when making a shared
object; recompile with -fPIC
Not really knowing what this meant (except that the error apparently has to do with the LAPACK library), I just ran the same command as above, but this time setting LDFLAGS="-fPIC", as suggested by the error. That is, I did:
LDFLAGS="-fPIC" /home/myusername/.local/bin/python setup.py install --prefix=/home/myusername/.local.
However, I got the same error (except that the flag -fPIC was now added to the gcc command above).
I tried installing it using pip, i.e., running /home/myusername/.local/bin/pip install numpy (after successfully installing pip in my local path). However, I get the exact same error.
I searched on the web, but none of the errors seemed to be similar to mine. My first guess is that this has to do with some piece of code that needs root permissions to be executed, or maybe with some problem with the version of the LAPACK libraries.
Help, anyone?
The error message is telling you that your ATLAS library was not built with the -fPIC flag, which means it cannot be linked into a shared library such as a Python extension module. You need to rebuild ATLAS with the -fPIC flag; the ATLAS documentation describes how to do so.
It's kind of a pain to build from source. Is it possible to avoid doing that?
If we assume that you are trying to install on an x86 computer (Intel, AMD, whatever), can you just install Python on another x86 computer where you do have root, make a tar archive of the Python installation, copy it to the other computer, and unpack it there?
The problem with the above is that the pre-built Python might have hard-coded paths for where to look for libraries: it might need the libraries to be in /usr/share or whatever. It would be a bit of a hack, but you might be able to make a chroot jail and get Python to run.
You might also want to take a look at Enthought Python Distribution (EPD). I believe the EPD installer simply asks you where you want EPD installed, and installs it there.
http://www.enthought.com/products/epdgetstart.php?platform=linux
There is a free version of EPD. If you want 64-bit you would have to pay for EPD, but if 32-bit will work for you, EPD Free might be all you need.
http://www.enthought.com/products/epd_free.php
P.S. The Enthought web site seems to be rejecting any URL that doesn't start with www.! This means that some Google search links don't work unless you edit them to insert the www. at the beginning. I'm sure they will fix this soon.
You may want to look into EasyBuild for building your local Python version with numpy and scipy enabled, see http://hpcugent.github.com/easybuild/.
It basically takes all the nasty stuff away from you; you just need to configure it a little bit (specify where you want the software to end up, for example), and then you can build Python with the packages of your choice with a single command.
I've been trying to install the ALTA tool on my Windows computer. The instructions are very simple and straightforward:
clone the ALTA git repository: $ git clone https://gitlab.inria.fr/alta/alta.git
I should have all the mandatory dependencies installed:
-the SCons build system;
-a C++11 compiler, such as recent versions of GCC or Clang;
-Eigen >= 3.0 (libeigen3-dev package on Debian and derivatives; libeigen3 in MacPorts; eigen in Brew.)
Essentially, after I have those installed, I can run scons and it should check whether the required dependencies are met, and then all compilation byproducts will go to the sources/build directory, as the instructions say. The problem is that after running the scons command, I get the following response:
scons: Reading SConscript files ...
<<INFO>> Using config file "./configs/scons/config-windows-cl.py"
the current platform is: win32
Checking for C++ library dl... no
Checking for C++ library rt... no
Checking whether 'c++11' is supported... yes
Checking for eigen3 using pkg-config... no
Checking for C++ header file Eigen/Core... no
obtaining Eigen v3.2.7
error: downloaded file 'eigen-3.2.7.tar.gz' is inauthentic
error: got sha256 hash ea25f177c8716e7daa618533e116706d97e25c9912e016009d8a9264e39cad57 but expected 5a50a006f83480a31f1f9beabec9e91dad95138df19363ee73ccf57676f10405
eigen-3.2.7.tar.gz: downloaded file is inauthentic
The build process leaves an eigen-3.2.7.tar.gz file with a WRONG-HASH file type. Moreover, when I open the file, it reads, "Repository eigen/eigen not found".
What does it mean that the eigen-3.2.7.tar.gz file is inauthentic and why does it have a WRONG-HASH File type? My guess is that my machine is complaining that the eigen repository is not downloaded, but I thought I installed everything correctly.
Here is how I went about installing the dependencies:
Scons
I installed Scons build system by simply typing the following command in my anaconda python environment: conda install -c conda-forge scons
C++ compiler
This was actually already installed on my computer a while back. I can't exactly remember how it was installed, but my machine seems to recognize it in the checklist, so no need to worry about that.
Eigen
To install this dependency I simply cloned the repository from here on GitHub. The Eigen folder is found inside the alta directory (the highest-level directory).
I'm new to this, so it's very possible that my steps to install these dependencies were not correct. Should I set some sort of environment path? I'm wondering if I installed my eigen repository correctly. To be honest, I'm not exactly sure why the build process fails, so the issue may be something totally different from how I installed my dependencies. However, at this point I am lost and in need of further instruction or intuition.
The link to the installation page is here. As you can see, there aren't many instructions and they are quite simple, which makes this whole thing even more frustrating.
It doesn't sound like there's a lot wrong here. For Windows, the results look normal: libdl and librt are Linux-y things, and Windows platforms also don't have the pkg-config way of getting information about building with a library, so those configure results are nothing to worry about. It just sounds like the fetcher tool isn't resilient to the thing it needs to fetch already being there. You'll want to look in the external area to see why it's deciding to fetch, and is then unhappy with, something that's already in place. Maybe you weren't supposed to git clone that piece in the first place? The instructions you point to hint not: "If Eigen could not be found, it is automatically downloaded from upstream."
As to your problem of not finding a build subdirectory, your guess is correct: scons is basically two-pass. The first pass reads the config files and builds up the dependency tree; the second does any required builds. In this project, dependency fetching must be done in the first pass (there are ways to code such a thing as a build-time step, but that's harder, so most projects don't), so once the dependency check failed, it never went on to the build phase, and thus the build directory was never created.
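The two-pass behaviour can be sketched with a toy model (illustrative only; real SCons is far more involved). Pass 1 reads configuration and resolves dependencies; pass 2 runs the build actions. If pass 1 fails, pass 2 never starts, so no build directory ever appears:

```python
def configure(available_deps):
    """Pass 1: check dependencies; return the planned actions for pass 2."""
    missing = [d for d in ("eigen3",) if d not in available_deps]
    if missing:
        raise RuntimeError("missing dependencies: " + ", ".join(missing))
    return ["compile", "link"]

def build(actions):
    """Pass 2: execute the planned actions (never reached if pass 1 fails)."""
    for action in actions:
        print("running", action)

try:
    build(configure(available_deps=set()))  # eigen3 missing -> pass 1 fails
except RuntimeError as err:
    print(err)  # build() was never called, so nothing was created
```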
I think you need to remove your eigen download.
According to: http://alta.gforge.inria.fr/install.html
If Eigen could not be found, it is automatically downloaded from upstream.
The downloaded file is integrity-checked, and then the software is built and
installed under the external/build sub-directory of the source tree.
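The integrity check those docs mention is a plain checksum comparison, and that is exactly what failed above: judging by the "Repository eigen/eigen not found" text inside the file, the fetcher downloaded an error page instead of the tarball, so the bytes hash to the wrong value. A minimal sketch of such a check (the payloads here are made-up stand-ins, not the real tarball):

```python
import hashlib

def is_authentic(data, expected_sha256):
    # Hash the downloaded bytes and compare with the pinned value.
    return hashlib.sha256(data).hexdigest() == expected_sha256

real = b"the real eigen-3.2.7.tar.gz bytes"   # stand-in payload
pinned = hashlib.sha256(real).hexdigest()     # the value the project ships

print(is_authentic(real, pinned))                          # True
print(is_authentic(b"404: repository not found", pinned))  # False
```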
What I'm trying to do is ship my code to a remote server that may have a different python version installed and/or may not have the packages my app requires.
Right now, to achieve such portability I have to build a relocatable virtualenv with the interpreter and code. That approach has some issues (for example, you have to manually copy a bunch of libraries into your virtualenv, since --always-copy doesn't work as expected) and is generally slow.
There's (in theory) a way to build python itself statically.
I wonder if I could pack the interpreter together with my code into one binary and run my application as a module. Something like: ./mypython -m myapp run or ./mypython -m gunicorn -c ./gunicorn.conf myapp.wsgi:application.
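For what it's worth, the "-m" half of that idea is easy to model: the interpreter's -m switch is implemented by the stdlib runpy module, so a bundled interpreter only needs runpy and the application on sys.path. A sketch (the myapp module here is a hypothetical stand-in, created on the fly):

```python
import os
import runpy
import sys
import tempfile

# Fake a "myapp" module on disk (hypothetical stand-in for a real app).
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "myapp.py"), "w") as f:
    f.write("result = 6 * 7\n")

sys.path.insert(0, tmpdir)

# runpy.run_module is the same machinery behind `python -m myapp`;
# it executes the module and returns its resulting globals.
globs = runpy.run_module("myapp")
print(globs["result"])  # 42
```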
There are two ways you could go about solving your problem:
Use a static builder, like freeze, or pyinstaller, or py2exe
Compile using cython
This answer explains how to do it using the second approach, since the first method is not cross-platform or cross-version and has been explained in other answers. Also, using programs like pyinstaller typically results in huge file sizes, while using cython results in a file that's much smaller.
First, install cython.
sudo -H pip3 install cython
Then, you can use cython to generate a C file out of the Python .py file
(in reference to https://stackoverflow.com/a/22040484/5714445)
cython example_file.py --embed
Use GCC to compile it after getting your current python version (Note: The below assumes you are trying to compile it to Python3)
PYTHONLIBVER=python$(python3 -c 'import sys; print(".".join(map(str, sys.version_info[:2])))')$(python3-config --abiflags)
gcc -Os $(python3-config --includes) example_file.c -o output_bin_file $(python3-config --ldflags) -l$PYTHONLIBVER
You will now have a binary file output_bin_file, which is what you are looking for
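Incidentally, the PYTHONLIBVER string computed by the shell snippet above can also be derived from inside Python, which is handy for checking what you're about to link against (a sketch; abiflags may be empty depending on your build):

```python
import sys

# Mirror of: python$(python3 -c '...version_info[:2]...')$(python3-config --abiflags)
version = ".".join(map(str, sys.version_info[:2]))
abiflags = getattr(sys, "abiflags", "")  # e.g. "m" on some older CPython 3.x builds
print("python" + version + abiflags)     # e.g. python3.11
```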
Other things to note:
Change example_file.py to whatever file you are actually trying to compile.
Cython is normally used to add C-type variable definitions for static memory allocation, which speeds up Python programs. In your case, however, you will still be using traditional Python definitions.
If you are using additional libraries (opencv, for example), you might have to provide their directory with -L and then specify the library name with -l in the GCC flags. For more information on this, please refer to GCC flags.
The above method might not work for anaconda python, as you will likely have to install a version of gcc that is compatible with your conda-python.
You might wish to investigate Nuitka. It takes python source code and converts it into C++ API calls, then compiles that into an executable binary (ELF on Linux). It has been around for a few years now and supports a wide range of Python versions.
You will probably also get a performance improvement if you use it. Recommended.
You're probably looking for something like Freeze, which is able to compile your Python application with all its libraries into a static binary:
PyPi page of Freeze
Python Wiki page of Freeze
Sourceforge page of Freeze
If you are on a Mac you can use py2app to create a .app bundle, which starts your Django app when you double-click on it.
I described how to bundle Django and CherryPy into such a bundle at https://moosystems.com/articles/14-distribute-django-app-as-native-desktop-app-01.html
In the article I use pywebview to display your Django site in a local application window.
Freeze options:
https://pypi.python.org/pypi/bbfreeze/1.1.3
http://cx-freeze.sourceforge.net/
However, your target server should have the environment you want; you should be able to 'create' it. If it doesn't, you should build your software to match the environment.
I found this handy guide on how to install custom version of python to a virtualenv, assuming you have ssh access: https://stackoverflow.com/a/5507373/5616110
In virtualenv, you should be able to pip install anything and you shouldn't need to worry about sudo privileges. Of course, having those and access to package manager like apt makes everything a lot easier.
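Once the environment is active, a quick way to confirm you're actually inside it (and thus that pip installs land locally, no sudo needed) is to compare the interpreter's prefixes; a minimal sketch:

```python
import sys

# In a virtualenv/venv, sys.prefix points inside the environment while
# sys.base_prefix (or real_prefix, for old virtualenv) still points at
# the base interpreter; if they match, you're NOT in an environment.
base = getattr(sys, "base_prefix", sys.prefix)
in_env = sys.prefix != base or hasattr(sys, "real_prefix")
print("inside a virtual environment:", in_env)
```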
I have created a docker image that relies on Nuitka and a custom statically linked python3.10 to create a static binary.
Did not test it extensively, if you have the chance please let me know if it works for your use case.
You can check it at:
https://github.com/joaompinto/docker-build-python-static-bin
I want to send somebody my compiled fortran extension on a Mac (compiled with f2py and gfortran).
The problem is that it doesn't work on other Macs unless they also install Xcode (2 GB, yikes!) and gfortran. So apparently some additional files are missing when I just send the compiled extension.
Does anybody know what other files to include or (better) how to compile a fortran extension without needing to send any additional files?
Thanks,
Mark
Well, when you compile a module with f2py, it essentially creates a dynamic library (.so) which uses your system libraries. For instance, on my computer, the linking step is,
gfortran [...] -lpython2.7 -lgfortran -o ./my_f2py_module.so
Therefore, if you want to be able to execute the resulting f2py module on a different computer (assuming the same architecture), libpython2.7.so and libgfortran.so should be available there.
I don't know much about OS X deployment, but I think you should:
use a compiler present by default on Mac (i.e. clang), which should work with f2py. Or alternatively install gfortran on both systems.
make sure you link to the same version of python in both cases
Also use ldd ./my_f2py_module.so to list all the libraries that it is linked to. If some of them cannot be found on the current system, you will also see it with this command.
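A similar check can also be done from Python itself with ctypes.util.find_library, which asks the system's linker machinery whether (and as what) a library can be found; a sketch, with library names chosen to match this scenario:

```python
from ctypes.util import find_library

# find_library returns a resolvable library name (e.g. a soname on
# Linux) or None when the library cannot be located on this system.
for name in ("gfortran", "python2.7", "m"):
    print(name, "->", find_library(name))
```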
I installed PyCrypto on Windows via pip, but I was not able to build Crypto.PublicKey._fastmath because GMP was not found.
I know there is a binary version on voidspace, but I would like to build the latest version of PyCrypto.
The following is one way to achieve your goal. There are other, probably better, ways (e.g. based on Visual Studio), but this one has worked for me. Additionally, it does not use pip.
All operations are carried out on a command prompt.
Install MinGW, including MSYS and the Development Toolkit. This will give you a fairly complete Unix-like development environment.
Ensure that the MinGW binaries are in your PATH environment variable. You need MinGW\bin and MinGW\msys\1.0\bin.
Download the MPIR sources into a temporary directory. It is important that you do not use 2.5.1 because of a bug that will break the build; 2.5.0 is fine.
Build the MPIR library. This is fairly straightforward: execute bash configure followed by make.
HACK #1 Copy libmpir.a from mpir-2.5.0\.libs into C:\Python2.7.1\libs. This is necessary because distutils is broken and I could not find a way to point it to the correct library location.
HACK #2 Edit C:\Python2.7.1\Lib\distutils\cygwincompiler.py and remove any occurrence of the string -mno-cygwin. The reason is explained here.
Download the PyCrypto sources and unpack them in another temporary directory.
Set the CPPFLAGS environment variable to the MPIR directory, which contains mpir.h.
HACK #3 Edit setup.py and add the following line in the build_extension method:
self.__add_compiler_option(os.environ['CPPFLAGS'])
Run bash configure. You should see two lines saying:
checking for __gmpz_init in -lgmp... no
checking for __gmpz_init in -lmpir... yes
Execute python setup.py build -c mingw32. You should see no errors.
Execute python setup.py test to verify that everything is fine.
Execute python setup.py install to copy the files into your local Python repository.
Alternatively, run python setup.py bdist_wininst to create an installer.
I really hate all the various hacks, and I'd love to hear if they can be avoided.
I'm trying to build a python extension DLL on a 64-bit Win7 machine using cygwin (as cygwin only runs as a 32-bit process, this is actually cross-compiling).
I created libpython27.a myself from python27.dll using dlltool (as explained, for example, here), but the build fails during the linker phase, saying:
skipping incompatible c:\Python27\libs/libpython27.a when searching for -lpython27
This is exactly the error reported here (where the guy ended up moving to MSVC compiler...).
More info:
- Active Python 2.7.2, win64, x64
- latest version of cygwin, using the /usr/bin/x86_64-w64-mingw32-g++.exe compiler
Does anyone know if this is supported?
Is there a way of using dlltool that I'm missing here?
(I did find guidance here to use
dlltool --as-flags=--64 -m i386:x86-64 -k -l libpython27.a -d python.def
but when doing so I got an "invalid bfd target" error from dlltool.)
Thanks!
Update: I believe it can be done because Enthought python contains such a file. I would like to create one for the more common distributions which don't contain it.
The problem is that you are using the 32-bit dlltool, probably the one in C:\MinGW\bin instead of C:\MinGW64\bin. You can change your path, or run the 64-bit tool explicitly, like so:
C:\MinGW64\bin\dlltool -v --dllname python27.dll --def python27.def --output-lib libpython27.a
I'm not sure how helpful you'll find this, but at the bottom of the page you linked to there's a link to here, where it says:
Do not use MinGW-w64. As you will notice, the MinGW import library for
Python (e.g. libpython27.a) is omitted from the AMD64 version of
Python. This is deliberate. Do not try to make one using dlltool.
There is no official MinGW-w64 release yet, it is still in "beta" and
considered unstable, although you can get a 64-bit build from e.g.
TDM-GCC. There have also been issues with the mingw runtime
conflicting with the MSVC runtime; this can happen from places you
don't expect, such as inside runtime libraries for g++ or gfortran. To
stay on the safe side, avoid MinGW-w64 for now.