I've been trying to install the ALTA tool on my Windows computer. The instructions are very simple and straightforward:
clone the ALTA git repository: $ git clone https://gitlab.inria.fr/alta/alta.git
I should have all the mandatory dependencies installed:
- the SCons build system;
- a C++11 compiler, such as recent versions of GCC or Clang;
- Eigen >= 3.0 (the libeigen3-dev package on Debian and derivatives; libeigen3 in MacPorts; eigen in Homebrew).
Essentially, once those are installed, I can run scons, which should check whether the required dependencies are met; all compilation byproducts then go to the sources/build directory, as the instructions say. The problem is that after running the scons command, I get the following response:
scons: Reading SConscript files ...
<<INFO>> Using config file "./configs/scons/config-windows-cl.py"
the current platform is: win32
Checking for C++ library dl... no
Checking for C++ library rt... no
Checking whether 'c++11' is supported... yes
Checking for eigen3 using pkg-config... no
Checking for C++ header file Eigen/Core... no
obtaining Eigen v3.2.7
error: downloaded file 'eigen-3.2.7.tar.gz' is inauthentic
error: got sha256 hash ea25f177c8716e7daa618533e116706d97e25c9912e016009d8a9264e39cad57 but expected 5a50a006f83480a31f1f9beabec9e91dad95138df19363ee73ccf57676f10405
eigen-3.2.7.tar.gz: downloaded file is inauthentic
The compilation process produces an eigen-3.2.7.tar.gz file whose type is reported as WRONG-HASH. Moreover, when I open the file, it reads, "Repository eigen/eigen not found".
What does it mean that the eigen-3.2.7.tar.gz file is inauthentic, and why does it have a WRONG-HASH file type? My guess is that my machine is complaining that the eigen repository was not downloaded, but I thought I had installed everything correctly.
Here is how I went about installing the dependencies:
Scons
I installed the SCons build system by simply typing the following command in my Anaconda Python environment: conda install -c conda-forge scons
C++ compiler
This was actually already installed on my computer a while back. I can't remember exactly how it was installed, but my machine seems to recognize it in the checklist, so no need to worry about that.
Eigen
To install this dependency I simply cloned the repository from GitHub. The Eigen folder is inside the alta directory (the top-level directory).
I'm new to this, so it's quite possible that my steps to install these dependencies were not correct. Should I set some sort of environment path? I'm wondering if I installed my eigen repository correctly. To be honest, I'm not exactly sure why the build process fails, so the issue may be something totally different from how I installed my dependencies. At this point, however, I am lost and in need of further instruction or intuition.
The link to the installation page is here. As you can see, there are not many instructions, and they are quite simple, which makes this whole thing even more frustrating.
It doesn't sound like there's a lot wrong here. For Windows, the results look normal: libdl and librt are Linux-y things, and Windows platforms also don't have the pkg-config way of getting information about building with a library, so those configure results are nothing to worry about. It just sounds like the fetcher tool isn't resilient to the thing it needs to fetch already being there. You want to look at the external area to see why it's deciding to fetch, and then being unhappy with, something that's already in place. Maybe you weren't supposed to git clone that piece in the first place? The instructions you point to hint not: "If Eigen could not be found, it is automatically downloaded from upstream."
As to your problem of not finding a build subdirectory, your guess is correct: scons is basically two-pass. The first pass reads the config files and builds up the dependency tree; the second does any required builds. In this project, dependency fetching must happen in the first pass (there are ways to code such a thing as a build-time step, but that's harder, so most projects don't), so once the dependency check failed, it never went on to the build phase, and thus the build directory was never created.
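The two-pass behavior described above can be sketched as a toy model (this is illustrative Python, not actual SCons code):

```python
# Toy model of the two-pass behavior: pass 1 reads configuration and checks
# dependencies; pass 2, which creates the build directory, only runs if
# pass 1 succeeded.

def configure(deps_ok):
    """Pass 1: read config files, build the dependency tree, check deps."""
    if not deps_ok:
        raise RuntimeError("dependency check failed; build phase never runs")
    return {"build_dir": "sources/build"}

def build(config):
    """Pass 2: create the build directory and compile."""
    return "artifacts written to " + config["build_dir"]

# With a failed dependency check we never reach the build phase, which is
# why no build directory ever appears:
try:
    build(configure(deps_ok=False))
except RuntimeError as exc:
    print(exc)
```

This is why a configure-time failure leaves no sources/build directory at all, rather than a partially populated one.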
I think you need to remove your eigen download.
According to: http://alta.gforge.inria.fr/install.html
If Eigen could not be found, it is automatically downloaded from upstream.
The downloaded file is integrity-checked, and then the software is built and
installed under the external/build sub-directory of the source tree.
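The "inauthentic" error is just that integrity check failing: the fetcher hashes whatever file is sitting there and compares the digest against a known-good one. A minimal sketch of such a check (the file name and expected digest are taken from the log above; the file contents here are a stand-in):

```python
import hashlib

EXPECTED = "5a50a006f83480a31f1f9beabec9e91dad95138df19363ee73ccf57676f10405"

def verify_download(path, expected_sha256):
    """Return True if the file's SHA-256 digest matches the expected one."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Stand-in for the bad download: the "tarball" is really an error page,
# so its digest cannot possibly match the expected one.
with open("eigen-3.2.7.tar.gz", "wb") as f:
    f.write(b"Repository eigen/eigen not found")

print(verify_download("eigen-3.2.7.tar.gz", EXPECTED))  # False
```

So the fix is to delete the stale eigen-3.2.7.tar.gz (and the cloned Eigen folder) so the fetcher can download a fresh, correct copy.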
Related
The pybind11 documentation is generally good, but one area where it falls short is explaining the install process and how to get and run the examples using CMake.
I've managed to figure out how to get and build the examples, but it leads to more questions. Almost all the examples rely on downloading the pybind11 repo into the examples folder and including the root folder of the repo in a CMake run (the root folder contains a CMakeLists.txt file). The contents of that repo have a lot in common with what gets added to a Python environment when you install pybind11 using pip or conda, but the folder organization is completely different.
So I'm curious:
Why the difference?
Is there a way to use the content in the environment install in such a way that you don't also have to download the repo in order to build examples using cmake?
Failing that, what is the best way to put the pybind11 repo in a common place so it doesn't have to be copied all over the place in order to build examples, or in order to provide the important added cmake functionality for one's own code?
I'm really uncomfortable in general not understanding the "how this works" aspect of such things, so this will really help me.
Not sure which examples you mean, but to install pybind11 on your system and use it in different projects, just follow the standard procedure for installing CMake-based packages:
mkdir build
cd build
cmake ../ # optionally you can specify -DPYBIND11_PYTHON_VERSION=<your python version>
make
sudo make install
Then in one of your other project's CMakeLists.txt you can use it e.g. like this:
find_package(pybind11 CONFIG REQUIRED)
message(STATUS "Found pybind11 v${pybind11_VERSION}: ${pybind11_INCLUDE_DIRS}")
add_library(mylib MODULE <your sources>)
target_link_libraries(mylib pybind11::module)
For more CMake commands consult pybind11Config.cmake.
Then, if you don't want to install it on your system, you can just embed the pybind11 repo in your project tree and use add_subdirectory instead of find_package. All the offered features will be the same. The package is well designed: it detects whether it is being used as the master project or not, and either defines INSTALL targets or doesn't.
So I guess that your last 2 questions are answered?
FindPythonLibs.cmake is somehow finding Python versions that don't exist/were uninstalled.
When I run find_package(PythonLibs 3 REQUIRED) CMake properly finds my Python3.6 installation and adds its include path, but then I get the error
No rule to make target 'C:/Users/ultim/Anaconda2/libs/python27.lib', needed by 'minotaur-cpp.exe'. Stop.
This directory doesn't exist, and I recently uninstalled Anaconda and the Python that came with it. I've looked through my environment variables and registry, but I find no reference to this location.
Would anyone know where there might still be a reference to this location?
Since the "REQUIRED" option to find_package() is not working, you can be explicit about which Python library to use by setting CMake cache variables on the command line:
cmake -DPYTHON_INCLUDE_DIR=C:\Python36\include -DPYTHON_LIBRARY=C:\Python36\libs\python36.lib ..
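To get values for those cache variables that actually match the interpreter you want, you can ask that interpreter directly rather than letting FindPythonLibs guess from stale registry entries. A small sketch (the variable name matches the command above; exact paths depend on your install):

```python
# Print a plausible -DPYTHON_INCLUDE_DIR value for the interpreter running
# this script, plus the prefix under which the Windows import library
# (<prefix>\libs\pythonXY.lib) lives.
import sys
import sysconfig

include_dir = sysconfig.get_paths()["include"]
print("-DPYTHON_INCLUDE_DIR=" + include_dir)
print("prefix:", sys.prefix)
```

Running this with C:\Python36\python.exe would give you consistent values for both cache variables.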
I'm trying to make a conda recipe for ProjectQ. Something weird is going on during the build process under both Linux and OS X. I can build and install the package by hand (i.e., using python setup.py install from the cloned git repo directory). However, when I make a recipe that does exactly the same thing, it fails on both platforms.
My build recipe is here. What is particularly weird is that even though I specify python 3.6.* under the build requirements in the meta.yaml file, the conda-build procedure names the package "projectq-v0.3.0-py27_0", and when it installs the package, it tries to do so in one of the python 2.7 directories, which is what I assume makes it fail.
So clearly I'm doing something dumb, but I can't for the life of me figure out what. Can anyone see anything I've done wrong? Thanks in advance.
I figured this out. Thanks to everyone who took the time to look over my question. There was truly no way anyone could have figured this out for me, since it was rather specific to the package I was installing. I'll try to summarize what I've learned in case someone else runs into something similar.
First, as I noted in one of the comments, if you're specifying a specific version requirement under the build, you had better specify the same version requirement under run. Initially I had "python 3.6.*" specified under build, but just "python" under run. This caused the package to be named something ending with "-py27_0", since the package name, understandably, depends upon what's required to run it, not to build it.
The really tricky thing was understanding that there were additional requirements specified in the setup.py script that were being installed automatically when I ran python setup.py install by hand, but were not being installed when I tried to build under conda. Once I added these requirements to the meta.yaml recipe, everything builds and tests fine.
So, the lessons are to be consistent with your conda requirements between build and run, and make sure you have all of the requirements listed, including bonus requirements that may be specified in the setup.py file.
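To make those lessons concrete, the relevant part of a meta.yaml might look like this (the package list is illustrative, not the full ProjectQ recipe):

```yaml
requirements:
  build:
    - python 3.6.*
    - setuptools
  run:
    # must match the pin under build, otherwise conda-build names the
    # package for its default interpreter (hence the unexpected py27_0)
    - python 3.6.*
    # plus anything setup.py lists in install_requires
```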
Thanks again to all who looked at this.
I installed a local version of Python 2.7 in my home directory (Linux RedHat) under ~/opt using the --prefix flag.
More specifically, Python was placed in ~/opt/bin.
Now, I want to install NumPy, but I am not really sure how I would achieve this. All I found in the INSTALL.txt and online documentation was the command to use the compiler.
I tried gfortran, and it worked without any error message:
python setup.py build --fcompiler=gnu95
However, I am not sure how to install it for my local version of Python.
Also, I have to admit that I don't really understand how this whole approach works in general. E.g., what does setup.py build do? Is it creating module files that I have to move to a specific folder?
I hope someone can give me some help here. I would also appreciate a few lines of information on how this approach works, or maybe some resources where I can read up on it (I didn't find anything on the NumPy pages).
Your local version of Python should keep all of its files somewhere in ~/opt (presumably). As long as this is the Python installation that gets used when you issue the command
python setup.py build --fcompiler=gnu95
you should be all set, because the sys module contains a bunch of constants which the setup script uses to determine where to put the modules once they are built.
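You can inspect those constants directly; run this with your ~/opt interpreter and the printed paths should all sit under ~/opt, which is how setup.py knows where to install:

```python
# The install location is derived from the interpreter running setup.py,
# not from where the source tree lives.
import sys
import sysconfig

print("interpreter:   ", sys.executable)
print("prefix:        ", sys.prefix)  # for a --prefix install, your ~/opt
print("site-packages: ", sysconfig.get_paths()["purelib"])
```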
So -- running python setup.py build issues all of the necessary commands to build the module (compiling the C/Fortran code into shared object libraries that python can load dynamically and copying the pure python code to create the proper directory structure). The module is actually built somewhere in the build subdirectory which gets created during the process if it doesn't already exist. Once the library has been built (successfully), installing it should be as simple as:
python setup.py install
(You might need to sudo if you don't have write privileges in the install directory).
I installed PyCrypto on Windows via pip, but I was not able to build Crypto.PublicKey._fastmath because GMP was not found.
I know there is a binary version on voidspace, but I would like to build the latest version of PyCrypto.
The following is one way to achieve your goal. There are other, probably better ways (e.g. based on Visual Studio), but this one has worked for me. Additionally, it does not use pip.
All operations are carried out on a command prompt.
Install MinGW, including MSYS and the Development Toolkit. This will give you a fairly complete Unix-like development environment.
Ensure that the MinGW binaries are in the PATH environment variable. You need MinGW\bin and MinGW\msys\1.0\bin.
Download the MPIR sources into a temporary directory. It is important that you do not use 2.5.1 because of a bug that will break the build; 2.5.0 is fine.
Build the MPIR library. This is fairly straightforward: execute bash configure followed by make.
HACK #1 Copy libmpir.a from mpir-2.5.0\.libs into C:\Python2.7.1\libs. This is necessary because distutils is broken and I could not find a way to direct it to the correct library location.
HACK #2 Edit C:\Python2.7.1\Lib\distutils\cygwincompiler.py and remove any occurrence of the string -mno-cygwin. The reason is explained here.
Download PyCrypto sources and unpack them in another temporary directory.
Set CPPFLAGS environment variable to the MPIR directory, which contains mpir.h.
HACK #3 Edit setup.py and add the following line in the build_extension method:
self.__add_compiler_option(os.environ['CPPFLAGS'])
Run bash configure. You should see two lines saying:
checking for __gmpz_init in -lgmp... no
checking for __gmpz_init in -lmpir... yes
Execute python setup.py build -c mingw32. You should see no errors.
Execute python setup.py test to verify that everything is fine.
Execute python setup.py install to copy the files into your local Python repository.
Alternatively, run python setup.py bdist_wininst to create an installer.
I really hate all the various hacks, and I'd love to hear if they can be avoided.