Docker build: Running setup.py install for mariadb fails - python

I'm building a docker image for python code that must connect with a remote MariaDB server. I was able to run this locally after some trial and error, but to make the installation on a remote (virtual) server easier, I really would like the solution to work too with a docker image.
If I use an older Alpine version (< 3.11) instead, I run into a compatibility issue:
MariaDB Connector/Python requires MariaDB Connector/C >= 3.1.5, found version 3.0.10
Different Python versions don't seem to work either.
My Dockerfile:
FROM python:3.6-alpine
RUN apk add --no-cache mariadb-dev build-base
RUN pip install mariadb
Console output:
> docker build . -t dockerpython
Sending build context to Docker daemon 137.5MB
Step 1/7 : FROM python:3.6-alpine3.12
3.6-alpine3.12: Pulling from library/python
Digest: sha256:c228fcf0064d5595b4c7aab92b68598917383fe066dc5e17d2e426b0395c7848
Status: Downloaded newer image for python:3.6-alpine3.12
---> 176f50d88b04
Step 2/7 : RUN apk add --no-cache mariadb-dev build-base
---> Using cache
---> afd8f9e92e7f
Step 3/7 : RUN pip install mariadb
---> Running in 887a7e3ea2f2
Collecting mariadb
Downloading mariadb-1.0.3.tar.gz (66 kB)
Building wheels for collected packages: mariadb
Building wheel for mariadb (setup.py): started
Building wheel for mariadb (setup.py): finished with status 'error'
ERROR: Command errored out with exit status 1:
command: /usr/local/bin/python -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-x63w3ma6/mariadb/setup.py'"'"'; __file__='"'"'/tmp/pip-install-x63w3ma6/mariadb/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-kxbxgt6f
cwd: /tmp/pip-install-x63w3ma6/mariadb/
Complete output (29 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/mariadb
copying mariadb/__init__.py -> build/lib.linux-x86_64-3.6/mariadb
creating build/lib.linux-x86_64-3.6/mariadb/constants
copying mariadb/constants/__init__.py -> build/lib.linux-x86_64-3.6/mariadb/constants
copying mariadb/constants/CLIENT.py -> build/lib.linux-x86_64-3.6/mariadb/constants
copying mariadb/constants/INDICATOR.py -> build/lib.linux-x86_64-3.6/mariadb/constants
copying mariadb/constants/CURSOR.py -> build/lib.linux-x86_64-3.6/mariadb/constants
copying mariadb/constants/FIELD_TYPE.py -> build/lib.linux-x86_64-3.6/mariadb/constants
running build_ext
building 'mariadb._mariadb' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/mariadb
gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DPY_MARIADB_MAJOR_VERSION=1 -DPY_MARIADB_MINOR_VERSION=0 -DPY_MARIADB_PATCH_VERSION=3 -I/usr/include/mysql -I/usr/include/mysql/mysql -I./include -I/usr/local/include/python3.6m -c mariadb/mariadb.c -o build/temp.linux-x86_64-3.6/mariadb/mariadb.o -DDEFAULT_PLUGINS_SUBDIR="/usr/lib/mariadb/plugin"
gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DPY_MARIADB_MAJOR_VERSION=1 -DPY_MARIADB_MINOR_VERSION=0 -DPY_MARIADB_PATCH_VERSION=3 -I/usr/include/mysql -I/usr/include/mysql/mysql -I./include -I/usr/local/include/python3.6m -c mariadb/mariadb_connection.c -o build/temp.linux-x86_64-3.6/mariadb/mariadb_connection.o -DDEFAULT_PLUGINS_SUBDIR="/usr/lib/mariadb/plugin"
gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DPY_MARIADB_MAJOR_VERSION=1 -DPY_MARIADB_MINOR_VERSION=0 -DPY_MARIADB_PATCH_VERSION=3 -I/usr/include/mysql -I/usr/include/mysql/mysql -I./include -I/usr/local/include/python3.6m -c mariadb/mariadb_exception.c -o build/temp.linux-x86_64-3.6/mariadb/mariadb_exception.o -DDEFAULT_PLUGINS_SUBDIR="/usr/lib/mariadb/plugin"
gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DPY_MARIADB_MAJOR_VERSION=1 -DPY_MARIADB_MINOR_VERSION=0 -DPY_MARIADB_PATCH_VERSION=3 -I/usr/include/mysql -I/usr/include/mysql/mysql -I./include -I/usr/local/include/python3.6m -c mariadb/mariadb_cursor.c -o build/temp.linux-x86_64-3.6/mariadb/mariadb_cursor.o -DDEFAULT_PLUGINS_SUBDIR="/usr/lib/mariadb/plugin"
gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DPY_MARIADB_MAJOR_VERSION=1 -DPY_MARIADB_MINOR_VERSION=0 -DPY_MARIADB_PATCH_VERSION=3 -I/usr/include/mysql -I/usr/include/mysql/mysql -I./include -I/usr/local/include/python3.6m -c mariadb/mariadb_codecs.c -o build/temp.linux-x86_64-3.6/mariadb/mariadb_codecs.o -DDEFAULT_PLUGINS_SUBDIR="/usr/lib/mariadb/plugin"
mariadb/mariadb_codecs.c: In function 'my_strtoull':
mariadb/mariadb_codecs.c:148:15: error: 'ULONG_LONG_MAX' undeclared (first use in this function); did you mean 'ULLONG_MAX'?
148 | if (val > ULONG_LONG_MAX /10 || val*10 > ULONG_LONG_MAX - (*p - '0'))
| ^~~~~~~~~~~~~~
| ULLONG_MAX
mariadb/mariadb_codecs.c:148:15: note: each undeclared identifier is reported only once for each function it appears in
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for mariadb
...
The command '/bin/sh -c pip install mariadb' returned a non-zero code: 1

I filed (and already fixed) that bug in MariaDB's ticket system.
The fix will be available in MariaDB Connector/Python 1.0.4 (which will likely be available via pypi.org by the end of this week).
As a workaround you can download the latest version from the GitHub repository and build it manually, or you could try (I didn't test it):
export CFLAGS=-D_GNU_SOURCE=1
pip3 install mariadb
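If you want to keep the Alpine-based image, the same workaround can go straight into the Dockerfile. A minimal, untested sketch (the CFLAGS value is the one suggested above; the rest is the Dockerfile from the question):

FROM python:3.6-alpine
RUN apk add --no-cache mariadb-dev build-base
# workaround for the ULONG_LONG_MAX build error in Connector/Python 1.0.3
RUN CFLAGS="-D_GNU_SOURCE=1" pip install mariadb

Once 1.0.4 is on PyPI the CFLAGS prefix should no longer be needed.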

Before installing the mariadb package you need to install a few dependencies:
MariaDB version 10.2 -- sudo apt-get install -y libmariadb-dev
MariaDB version 10.3 -- sudo apt-get install -y libmariadb-dev-compat
And rather than going with Alpine or plain Python base images, you can use a debian:slim base image.
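For example, a minimal, untested sketch with a Debian-based python:*-slim image (the package name follows the note above; whether the packaged Connector/C is new enough, i.e. >= 3.1.5, depends on the Debian release behind the image):

FROM python:3.6-slim
# compiler plus MariaDB client headers; use libmariadb-dev-compat instead for MariaDB 10.3, as noted above
RUN apt-get update && apt-get install -y --no-install-recommends gcc libmariadb-dev \
 && rm -rf /var/lib/apt/lists/*
RUN pip install mariadb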

Related

Failed building wheel for trm.pgplot

I am working on Ubuntu 18.04 with Python 3.6. I am trying a normal pip installation:
pip3 install . --user
and getting the following error message:
Processing /home/chinmay/trm-pgplot
Building wheels for collected packages: trm.pgplot
Running setup.py bdist_wheel for trm.pgplot ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-b_8sag87-build/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmp6x5jsk6hpip-wheel- --python-tag cp36:
running bdist_wheel
running build
running build_py
package init file 'trm/__init__.py' not found (or not a regular file)
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/trm
creating build/lib.linux-x86_64-3.6/trm/pgplot
copying trm/pgplot/__init__.py -> build/lib.linux-x86_64-3.6/trm/pgplot
running build_ext
building 'trm.pgplot._pgplot' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/trm
creating build/temp.linux-x86_64-3.6/trm/pgplot
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DMAJOR_VERSION=0 -DMINOR_VERSION=1 -I/home/chinmay/.local/lib/python3.6/site-packages/numpy/core/include -I/usr/local/pgplot/ -I/usr/include/python3.6m -c trm/pgplot/_pgplot.c -o build/temp.linux-x86_64-3.6/trm/pgplot/_pgplot.o
In file included from /home/chinmay/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarraytypes.h:1822:0,
from /home/chinmay/.local/lib/python3.6/site-packages/numpy/core/include/numpy/ndarrayobject.h:12,
from /home/chinmay/.local/lib/python3.6/site-packages/numpy/core/include/numpy/arrayobject.h:4,
from trm/pgplot/_pgplot.c:791:
/home/chinmay/.local/lib/python3.6/site-packages/numpy/core/include/numpy/npy_1_7_deprecated_api.h:17:2:
warning: #warning "Using deprecated NumPy API, disable it with "
"#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
#warning "Using deprecated NumPy API, disable it with "
^~~~~~~
x86_64-linux-gnu-gcc -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/trm/pgplot/_pgplot.o -L/usr/X11R6/lib
-L/opt/local/lib -L/usr/local/pgplot/ -lcpgplot -lpgplot -lX11 -lm -lgfortran -lpng -lz -o build/lib.linux-x86_64-3.6/trm/pgplot/_pgplot.cpython-36m-x86_64-linux-gnu.so
/usr/bin/ld: /usr/local/pgplot//libpgplot.a(xwdriv.o): relocation R_X86_64_PC32 against symbol `stderr@@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
Try this: apt-get install libjpeg-dev zlib1g-dev
Then try: pip3 install Pillow (it worked for me).
Also try updating the setup tools: sudo pip install -U setuptools
You need a C++ compiler:
sudo apt install g++
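Putting those suggestions together, a hedged sequence to try (only commands already mentioned above, nothing else assumed):

sudo apt install g++
sudo apt-get install libjpeg-dev zlib1g-dev
sudo pip install -U setuptools
pip3 install . --user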

Missing numpy header while installing sklearn on Alpine Linux

I'm trying to install sklearn on top of a Docker image (FROM astronomerinc/ap-airflow:master-1.10.5-onbuild). Environment coming with the source image:
Alpine Linux v3.10 (kernel 4.9.93-linuxkit-aufs)
Python 3.7.3
numpy==1.17.2
pandas==0.25.1
pandas-gbq==0.11.0
...
I had scipy==1.3.1 in my requirements.txt and had no issues installing it with pip; however, when I added scikit-learn to requirements.txt and rebuilt, I got this error saying a numpy header is missing:
creating build/temp.linux-x86_64-3.7
creating build/temp.linux-x86_64-3.7/sklearn
creating build/temp.linux-x86_64-3.7/sklearn/svm
creating build/temp.linux-x86_64-3.7/sklearn/svm/src
creating build/temp.linux-x86_64-3.7/sklearn/svm/src/libsvm
compile options: '-I/usr/lib/python3.7/site-packages/numpy/core/include -c'
g++: sklearn/svm/src/libsvm/libsvm_template.cpp
ar: adding 1 object files to build/temp.linux-x86_64-3.7/liblibsvm-skl.a
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
resetting extension 'sklearn.svm.liblinear' language from 'c' to 'c++'.
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'sklearn.__check_build._check_build' extension
compiling C sources
C compiler: gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -DTHREAD_STACK_SIZE=0x100000 -fPIC
creating build/temp.linux-x86_64-3.7/sklearn/__check_build
compile options: '-I/usr/lib/python3.7/site-packages/numpy/core/include -I/usr/lib/python3.7/site-packages/numpy/core/include -I/usr/include/python3.7m -c'
gcc: sklearn/__check_build/_check_build.c
gcc -shared -Wl,--as-needed -Wl,--as-needed build/temp.linux-x86_64-3.7/sklearn/__check_build/_check_build.o -L/usr/lib -Lbuild/temp.linux-x86_64-3.7 -lpython3.7m -o build/lib.linux-x86_64-3.7/sklearn/__check_build/_check_build.cpython-37m-x86_64-linux-gnu.so
building 'sklearn.cluster._dbscan_inner' extension
compiling C++ sources
C compiler: g++ -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -DTHREAD_STACK_SIZE=0x100000 -fPIC
creating build/temp.linux-x86_64-3.7/sklearn/cluster
compile options: '-I/usr/lib/python3.7/site-packages/numpy/core/include -I/usr/lib/python3.7/site-packages/numpy/core/include -I/usr/include/python3.7m -c'
g++: sklearn/cluster/_dbscan_inner.cpp
sklearn/cluster/_dbscan_inner.cpp:652:10: fatal error: numpy/arrayobject.h: No such file or directory
#include "numpy/arrayobject.h"
^~~~~~~~~~~~~~~~~~~~~
compilation terminated.
error: Command "g++ -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -Os -fomit-frame-pointer -g -DTHREAD_STACK_SIZE=0x100000 -fPIC -I/usr/lib/python3.7/site-packages/numpy/core/include -I/usr/lib/python3.7/site-packages/numpy/core/include -I/usr/include/python3.7m -c sklearn/cluster/_dbscan_inner.cpp -o build/temp.linux-x86_64-3.7/sklearn/cluster/_dbscan_inner.o -MMD -MF build/temp.linux-x86_64-3.7/sklearn/cluster/_dbscan_inner.o.d" failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /usr/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-o8ktwf40/scikit-learn/setup.py'"'"'; __file__='"'"'/tmp/pip-install-o8ktwf40/scikit-learn/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-p6ejlhi_/install-record.txt --single-version-externally-managed --compile Check the logs for full command output.
WARNING: You are using pip version 19.2.1, however version 19.2.3 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
The command '/bin/sh -c pip install --no-cache-dir -q -r requirements.txt' returned a non-zero code: 1
Several things I've tried:
upgrading pip
specifying an older version of scikit-learn
"explicitly" installing py3-numpy
Unfortunately, none of them worked. This post recommends setting the path manually, but that just wasn't the answer I was looking for.
Insights? Any help is appreciated!
I suggest you install py-numpy-dev in your Dockerfile:
RUN apk add py-numpy-dev
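The key point is ordering: the header package (plus a compiler toolchain) has to be present before pip builds scikit-learn, so in the Dockerfile the apk line goes ahead of the requirements install. A rough, untested sketch; build-base is an assumption on my part, and I have not checked how this interacts with the onbuild base image's own triggers:

RUN apk add --no-cache py-numpy-dev build-base
# ...only then let pip build scikit-learn
RUN pip install --no-cache-dir -r requirements.txt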

mujoco linux package installation error: exit status 1

I am trying to install mujoco, which is a package I need to simulate 3D systems for machine learning, but every time I try to install it I get the following error.
haroon@haroon-HP-ZBook-Studio-G3:~/Desktop/Machine Learning$ pip install mujoco-py
Collecting mujoco-py
Using cached mujoco-py-1.50.1.21.tar.gz
Requirement already satisfied: glfw>=1.4.0 in /usr/local/lib/python3.5/dist-packages (from mujoco-py)
Requirement already satisfied: numpy>=1.11 in /usr/local/lib/python3.5/dist-packages (from mujoco-py)
Requirement already satisfied: Cython>=0.25.2 in /usr/local/lib/python3.5/dist-packages (from mujoco-py)
Requirement already satisfied: imageio>=2.1.2 in /usr/local/lib/python3.5/dist-packages (from mujoco-py)
Requirement already satisfied: pillow in /usr/lib/python3/dist-packages (from imageio>=2.1.2->mujoco-py)
Building wheels for collected packages: mujoco-py
Running setup.py bdist_wheel for mujoco-py ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ku66fh_a/mujoco-py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/tmpr7sd9txypip-wheel- --python-tag cp35:
running bdist_wheel
running build
Compiling /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/cymj.pyx because it changed.
[1/1] Cythonizing /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/cymj.pyx
running build_ext
building 'mujoco_py.cymj' extension
creating /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder
creating /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5
creating /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp
creating /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a
creating /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a/mujoco-py
creating /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py
creating /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/gl
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Imujoco_py -I/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py -I/home/haroon/.mujoco/mjpro150/include -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/include/python3.5m -c /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/cymj.c -o /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/cymj.o -fopenmp -w
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Imujoco_py -I/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py -I/home/haroon/.mujoco/mjpro150/include -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/include/python3.5m -c /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/gl/osmesashim.c -o /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/gl/osmesashim.o -fopenmp -w
/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/gl/osmesashim.c:1:23: fatal error: GL/osmesa.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Failed building wheel for mujoco-py
Running setup.py clean for mujoco-py
Failed to build mujoco-py
Installing collected packages: mujoco-py
Running setup.py install for mujoco-py ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-ku66fh_a/mujoco-py/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-r2plzkky-record/install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_ext
building 'mujoco_py.cymj' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Imujoco_py -I/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py -I/home/haroon/.mujoco/mjpro150/include -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/include/python3.5m -c /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/cymj.c -o /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/cymj.o -fopenmp -w
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -Imujoco_py -I/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py -I/home/haroon/.mujoco/mjpro150/include -I/usr/local/lib/python3.5/dist-packages/numpy/core/include -I/usr/include/python3.5m -c /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/gl/osmesashim.c -o /tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/generated/_pyxbld_LinuxCPUExtensionBuilder/temp.linux-x86_64-3.5/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/gl/osmesashim.o -fopenmp -w
/tmp/pip-build-ku66fh_a/mujoco-py/mujoco_py/gl/osmesashim.c:1:23: fatal error: GL/osmesa.h: No such file or directory
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python3 -u -c "import setuptools,
tokenize;__file__='/tmp/pip-build-ku66fh_a/mujoco-
py/setup.py';f=getattr(tokenize, 'open', open)
(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code,
__file__, 'exec'))" install --record /tmp/pip-r2plzkky-record/install-
record.txt --single-version-externally-managed --compile" failed with error
code 1 in /tmp/pip-build-ku66fh_a/mujoco-py/
mujoco-py has many dependencies. If you can't use the Docker image, you have to install the dependencies yourself with sudo apt-get install.
The current Dockerfile lists these dependencies:
sudo apt-get install \
curl \
git \
libgl1-mesa-dev \
libgl1-mesa-glx \
libglew-dev \
libosmesa6-dev \
python3-pip \
python3-numpy \
python3-scipy \
net-tools \
unzip \
vim \
wget \
xpra \
xserver-xorg-dev
You might not need all of these, but there's probably no harm in installing everything. The error message in your question shows that at least GL/osmesa.h is required; that is probably provided by one of the mesa packages in the list above.
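If you only want to address this specific error rather than install everything, my best guess (untested) is that the OSMesa development package is the one that provides GL/osmesa.h:

sudo apt-get install libosmesa6-dev
pip3 install mujoco-py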
For CentOS, I fixed the problem by running the following commands:
sudo yum install mesa-libOSMesa-devel.x86_64
sudo yum install mesa-libGL-devel.x86_64
sudo yum install mesa-libGLU-devel.x86_64

Error when trying to install pyamg: clang: error: no such file or directory: '“-I/.../boost_1_59_0”'

I am trying to install pyamg in my virtual environment on macOS, but I am getting the following error:
c++: pyamg/amg_core/amg_core_wrap.cxx
clang: error: no such file or directory: '“-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0”'
clang: error: no such file or directory: '“-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0”'
error: Command "c++ -fno-strict-aliasing -fno-common -dynamic -arch x86_64 -arch i386 -g -Os -pipe -fno-common -fno-strict-aliasing -fwrapv -DENABLE_DTRACE -DMACOSX -DNDEBUG -Wall -Wshorten-64-to-32 -DNDEBUG -g -fwrapv -Os -Wall -Wstrict-prototypes -DENABLE_DTRACE “-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0” -arch x86_64 -arch i386 -pipe -D__STDC_FORMAT_MACROS=1 -I/Users/mas/PycharmProjects/Whale/Zahraa5/lib/python2.7/site-packages/numpy/core/include -I/System/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7 -c pyamg/amg_core/amg_core_wrap.cxx -o build/temp.macosx-10.10-intel-2.7/pyamg/amg_core/amg_core_wrap.o" failed with exit status 1
Use Anaconda or Miniconda:
conda install pyamg
It takes only a few seconds.
You can create an environment with:
conda create --name my_env python=2.7
Change into it:
source activate my_env
and install pyamg:
conda install pyamg
You can still use pip to install packages conda cannot find.
Life is too short to spend time on compilation issues. ;)
I'm pretty sure that the -I at the beginning of the paths is what's screwing everything up. I bet if you tried adding those export statements back to .bashrc but took out the -I and -L prefixes then your original command would start working.
Perhaps you had those there for a reason (I've never seen that), but removing those prefixes would probably work.
Actually, I think @oarfish called it correctly in the comments. The problem seems to be the funny “ and ” characters in those paths, which are distinct from the usual " double quote symbol.
The following reproduces the error for me:
~$ CPPFLAGS=“-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0” \
LIBS=“-L/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0/stage/lib” \
pip install pyamg
Collecting pyamg
Downloading pyamg-3.0.1.tar.gz (759kB)
100% |████████████████████████████████| 761kB 33.2MB/s
Installing collected packages: pyamg
Running setup.py install for pyamg
...
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/pyamg
creating build/temp.linux-x86_64-2.7/pyamg/amg_core
compile options: '-D__STDC_FORMAT_MACROS=1 -I/home/alistair/.venvs/pyamg/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c'
c++: pyamg/amg_core/amg_core_wrap.cxx
g++: error: “-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0”: No such file or directory
g++: error: “-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0”: No such file or directory
error: Command "c++ -pthread -DNDEBUG -g -fwrapv -O2 -Wall -fno-strict-aliasing -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security “-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0” -fPIC -D__STDC_FO
RMAT_MACROS=1 -I/home/alistair/.venvs/pyamg/local/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c pyamg/amg_core/amg_core_wrap.cxx -o build/temp.linux-x86_64-2.7/pyamg/amg_core/amg_core_wrap.o" failed with exit
status 1
----------------------------------------
Command "/home/alistair/.venvs/pyamg/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-Cl5_2g/pyamg/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" insta
ll --record /tmp/pip-kkjcoa-record/install-record.txt --single-version-externally-managed --compile --install-headers /home/alistair/.venvs/pyamg/include/site/python2.7/pyamg" failed with error code 1 in /tmp/pip-build-Cl5_2g/pyamg
Whereas with " symbols the installation succeeds:
~$ CPPFLAGS="-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0" \
LIBS="-L/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0/stage/lib" \
pip install pyamg
Collecting pyamg
Using cached pyamg-3.0.1.tar.gz
Installing collected packages: pyamg
Running setup.py install for pyamg
Successfully installed pyamg-3.0.1
The paths themselves are irrelevant; the compilation succeeds even though those directories don't actually exist on my machine.
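So if those flags are set in your .bashrc, the fix should be as simple as retyping the quotes as plain ASCII double quotes, e.g. (the paths are the ones from your error output):

export CPPFLAGS="-I/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0"
export LIBS="-L/Users/mas/PycharmProjects/kaggle-ndsb/boost_1_59_0/stage/lib"
pip install pyamg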

python.h not found when trying to install gevent-socketio

Here is my error when I try to install gevent-socketio:
Installing collected packages: gevent, greenlet
Running setup.py install for gevent
building 'gevent.core' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DLIBEV_EMBED=1 -DEV_COMMON= -DEV_CHECK_ENABLE=0 -DEV_CLEANUP_ENABLE=0 -DEV_EMBED_ENABLE=0 -DEV_PERIODIC_ENABLE=0 -Ibuild/temp.linux-x86_64-2.7/libev -Ilibev -I/usr/include/python2.7 -c gevent/gevent.core.c -o build/temp.linux-x86_64-2.7/gevent/gevent.core.o
gevent/gevent.core.c:17:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
Complete output from command /usr/bin/python -c "import setuptools;__file__='/var/www/bleu/build/gevent/setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --single-version-externally-managed --record /tmp/pip-_kv6Fy-record/install-record.txt:
running install
running build
running build_py
running build_ext
building 'gevent.core' extension
gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -DLIBEV_EMBED=1 -DEV_COMMON= -DEV_CHECK_ENABLE=0 -DEV_CLEANUP_ENABLE=0 -DEV_EMBED_ENABLE=0 -DEV_PERIODIC_ENABLE=0 -Ibuild/temp.linux-x86_64-2.7/libev -Ilibev -I/usr/include/python2.7 -c gevent/gevent.core.c -o build/temp.linux-x86_64-2.7/gevent/gevent.core.o
gevent/gevent.core.c:17:20: fatal error: Python.h: No such file or directory
compilation terminated.
error: command 'gcc' failed with exit status 1
Do you have an idea how I can fix this?
Install the Development Package(s):
CentOS/RHEL:
yum install python-devel
Debian/Ubuntu:
apt-get install python-dev
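For example, on Debian/Ubuntu (assuming the system Python 2.7 seen in your logs), the sequence would simply be:

sudo apt-get install python-dev
pip install gevent-socketio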
