Can't install Proj 8.0.0 for cartopy linux - python

I am trying to install Cartopy on Ubuntu and need the proj v8.0.0 binaries for Cartopy. However, when I try to apt-get install proj-bin, I can only get proj v6.3.1. How do I install the latest (or at least v8.0.0) proj for cartopy?

I'm answering my own question here partly to help others with this problem, and partly as an archive for myself so I know how to fix this issue if I come across it again. I spent quite a while trying to figure it out, and wrote detailed instructions, so see below:
Installing cartopy is a huge pain, and I've found using conda to be a very bad idea (it has bricked itself, and Python along with it, multiple times for me).
THIS INSTALLATION IS FOR LINUX.
Step 0. Update apt:
apt update
Step 1. Install GEOS:
Run the following command to install GEOS:
apt-get install libgeos-dev
In case that doesn't do it, install all files with this:
apt-get install libgeos-dev libgeos++-dev libgeos-3.8.0 libgeos-c1v5 libgeos-doc
Step 2. Install proj dependencies:
Install cmake:
apt install cmake
Install sqlite3:
apt install sqlite3
Install the curl development package:
apt install curl && apt-get install libcurl4-openssl-dev
Step 3. Install Proj
Trying apt-get just in case it works:
Unfortunately, cartopy requires proj v8.0.0 as a minimum, but if you install proj using apt you can only get proj v6.3.1.
Just for reference in case anything changes, this is the command to install proj from apt:
apt-get install proj-bin
I'm fairly sure this is all you need, but in case it's not, this command will install the remaining proj files:
apt-get install proj-bin libproj-dev proj-data
To remove the above installation, run:
apt-get remove proj-bin
or:
apt-get remove proj-bin libproj-dev proj-data
Building Proj from source
So if the above commands don't work (they weren't working as of 2022/04/08), follow the instructions below to install proj from source:
Go to your install folder and download proj-9.0.0 (or any other version packaged as proj-x.x.x.tar.gz):
wget https://download.osgeo.org/proj/proj-9.0.0.tar.gz
Extract the tar.gz file:
tar -xf proj-9.0.0.tar.gz
cd into the folder:
cd proj-9.0.0
Make a build folder and cd into it:
mkdir build && cd build
Run (this may take a while):
cmake ..
cmake --build .
cmake --build . --target install
Run the tests to make sure everything built correctly:
ctest
The test command failed on one test for me (19 - nkg), but otherwise was fine.
You should find the required files in the ./bin directory
Finally:
Copy the binaries to the /bin directory:
cp ./bin/* /bin
As per Justino's answer below, you may also need to copy the libraries:
cp ./lib/* /lib
Now after all this, you can finally install cartopy with pip:
pip install cartopy
After doing this, my cartopy still wasn't working. I left it until the next week, came back, and all of a sudden it was working, so maybe try restarting.
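For reference, here is a quick sanity check you can run once pip install cartopy succeeds. It is only a minimal sketch, assuming matplotlib and numpy were pulled in as cartopy dependencies; if the PROJ and GEOS libraries above were picked up correctly, the import and the coordinate transform should complete without errors.
import cartopy
import cartopy.crs as ccrs

print("cartopy", cartopy.__version__)

# Transform lon/lat (0, 51) from geodetic coordinates into Mercator;
# this exercises the underlying PROJ library that was just built.
x, y = ccrs.Mercator().transform_point(0.0, 51.0, ccrs.Geodetic())
print(x, y)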

The libraries should be copied manually:
sudo cp ./lib/* /lib
This worked for me.

Related

I rm'd python3.9, can I get it back?

I am new to Linux and I tried to change the symbolic link of python3 in /usr/bin/, and I accidentally removed the python3.9 file!
But I know I didn't delete it completely, because there are still a lot of files called python3.9.
After that, apt didn't work anymore and I got this error:
E: Problem executing scripts APT::Update::Post-Invoke-Success 'if /usr/bin/test -w /var/lib/command-not-found/ -a -e /usr/lib/cnf-update-db; then /usr/lib/cnf-update-db > /dev/null; fi'
Now I only have version 2.7 of Python and I can't install another because apt doesn't work!
So I hope someone can help me, and I wish you a good afternoon.
Edit: Since you've lost the apt command, you can't install or remove anything using apt.
The way to fix this is to reinstall the apt package for your architecture and then do the Python installation below.
To reinstall apt, download its .deb file from one of the sources listed in /etc/apt/sources.list; that file contains the links used for package installation and upgrades.
Find the download source using the cat /etc/apt/sources.list command.
Browse to the /pool/main/a/apt/ directory under that source and download the .deb file that matches your architecture.
Then install it using dpkg like this:
sudo dpkg -i PackageName.deb
Replace PackageName with your file name (e.g. apt_1.6.13_arm64 / apt_1.9.3_i386).
Restart the PC and then check the /usr/bin/ directory to make sure it installed properly.
If you find nothing there, run the locate apt-get command to locate it. If that still turns up nothing, there is no other way than reinstalling the OS itself.
Once you have reinstalled apt, use the following commands to freshly install Python.
Note that all the commands below are for Python 3, as you're concerned with version 3.9.
# To uninstall Python only
sudo apt-get remove python3.9
# To uninstall Python along with all its packages
sudo apt-get remove --auto-remove python3.9
# To also remove all the dependencies and configuration files
sudo apt-get purge --auto-remove python3.9
Now to install Python 3. The following command will install the latest version of python3, which at the time of writing is python3.9.
sudo apt-get install python3
You can also use pip to manage Python packages.
To install pip, use the following command:
sudo apt install python3-pip
Now to manage Python packages using pip:
# To install a package; replace PackageName with the name of the package (like flask)
sudo pip install PackageName
# To uninstall package
sudo pip uninstall PackageName
If you run into trouble with pip, get a list of all the commands pip supports with:
sudo pip help
You can list all the Python versions installed (in the default location) with:
ls /usr/bin/python*
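After fixing the symlink, it can also help to confirm which interpreter a given command actually resolves to. The small snippet below is just a convenience check, not part of the original answer:
import sys
print(sys.executable)   # full path of the interpreter you are running, e.g. /usr/bin/python3.9
print(sys.version)      # its version string
Run it with python3 to make sure /usr/bin/python3 now points at the interpreter you expect.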
Hope this will help in resolving the problem.

ImportError: libGL.so.1: cannot open shared object file: No such file or directory

I am trying to run cv2, but when I try to import it, I get the following error:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
The suggested solution online is installing
apt install libgl1-mesa-glx
but this is already installed and the latest version.
NB: I am actually running this on Docker, and I am not able to check the OpenCV version. I tried importing matplotlib and that imports fine.
Add the following lines to your Dockerfile:
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
These commands install the cv2 dependencies that are normally present on the local machine, but might be missing in your Docker container causing the issue.
[minor update on 20 Jan 2022: as Docker recommends, never put RUN apt-get update on a line by itself, as that causes caching issues]
The above solutions work, but their package sizes are quite big.
libGL.so.1 is provided by the libgl1 package, so the following command is sufficient:
apt-get update && apt-get install libgl1
This is a slightly better solution in my opinion: the python3-opencv package includes all of OpenCV's system dependencies.
RUN apt-get update && apt-get install -y python3-opencv
RUN pip install opencv-python
Try installing the opencv-python-headless Python dependency instead of opencv-python. It is a precompiled binary wheel with no external dependencies (other than numpy) and is intended for headless environments like Docker. This saved almost 700 MB in my Docker image compared with using the python3-opencv Debian package (with all its dependencies).
The package documentation discusses this and the related (more expansive) opencv-contrib-python-headless pypi package.
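To make the difference concrete, here is a small sketch of what a headless build can and cannot do; the exact error text is version dependent, but image I/O and processing work while GUI calls fail because the headless wheel is built without GUI support:
import numpy as np
import cv2

img = np.zeros((64, 64, 3), dtype=np.uint8)   # dummy image
resized = cv2.resize(img, (32, 32))            # processing works in headless builds
print(cv2.__version__, resized.shape)

try:
    cv2.imshow("window", img)                  # expected to fail without GUI support
except cv2.error as exc:
    print("GUI functions unavailable:", exc)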
Example reproducing the ImportError in the question
# docker run -it python:3.9-slim bash -c "pip -q install opencv-python; python -c 'import cv2'"
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/cv2/__init__.py", line 5, in <module>
from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
# docker run -it python:3.9-slim bash -c "pip -q install opencv-python-headless; python -c 'import cv2'"
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
For me, the only workaround that worked is the following:
# These are for libGL.so issues
# RUN apt-get update
# RUN apt install libgl1-mesa-glx
# RUN apt-get install -y python3-opencv
# RUN pip3 install opencv-python
RUN pip3 install opencv-python-headless==4.5.3.56
If you're on CentOS, RHEL, Fedora, or other linux distros that use yum, you'll want:
sudo yum install mesa-libGL -y
In my case it was enough to do the following, which also saves space compared to the solutions above:
RUN apt-get update && apt-get install -y --no-install-recommends \
    libgl1 \
    libglib2.0-0
Put this in the Dockerfile
RUN apt-get update
RUN apt install -y libgl1-mesa-glx
Before the line
COPY requirements.txt requirements.txt
For example
......
RUN apt-get update
RUN apt install -y libgl1-mesa-glx
COPY requirements.txt requirements.txt
......
I was getting the same error when trying to use OpenCV in the GCP App Engine Flex server environment. Replacing "opencv-python" with "opencv-python-headless" in the requirements.txt solved the problem.
The OpenCV documentation talks about different packages for desktop vs. server (headless) environments.
I hit this problem while using cv2 in a Docker container. I fixed it with:
pip install opencv-contrib-python
i.e. installing opencv-contrib-python rather than opencv-python.
Here is the solution you need:
pip install -U opencv-python
apt update && apt install -y libsm6 libxext6 ffmpeg libfontconfig1 libxrender1 libgl1-mesa-glx
I had the same issue on CentOS 8 after using pip3 install opencv on a non-GUI server that was lacking all sorts of graphics libraries.
dnf install opencv
pulls in all the needed dependencies.
"installing opencv-python-headless instead of opencv-python"
this works in my case!
I was deploying my website to Azure when this exception popped up:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
I then uninstalled the opencv-python package, installed the latter one, froze the requirements, and deployed again, and the problem was solved.
For a Raspberry Pi, this worked for me:
sudo apt-get install ffmpeg libsm6 libxext6 -y
For me, the problem was related to a proxy setting. For PyPI I was using a Nexus mirror, but for opencv nothing worked until I connected to a different network.
On Rocky Linux 9 I resolved the error using the command:
dnf install mesa-libGLU
Use opencv-python-headless if you're using Docker or running in a server environment.
I got the same issue on Ubuntu desktop, and none of the other solutions worked for me.
libGL.so.1 was correctly installed but for some reason Python wasn’t able to see it:
$ ldconfig -p | grep libGL.so.1
libGL.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libGL.so.1
The only solution that worked was to force it via LD_LIBRARY_PATH. Add the following to your ~/.bashrc, then run source ~/.bashrc or restart your shell:
export LD_LIBRARY_PATH="/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
I understand that overriding LD_LIBRARY_PATH is frowned upon, but for me this was the only solution that worked.
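If you want to confirm from inside Python whether the loader can actually open libGL.so.1 (and therefore whether LD_LIBRARY_PATH is really the issue), a small ctypes check like the following can help. It is just a diagnostic sketch, not something cv2 itself requires:
import ctypes
import ctypes.util

print(ctypes.util.find_library("GL"))   # prints a library name/path, or None
try:
    ctypes.CDLL("libGL.so.1")
    print("libGL.so.1 loaded fine")
except OSError as exc:
    print("failed to load libGL.so.1:", exc)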

Bazel cross compile of tensorflow for ARM fails

I am trying to build tensorflow to run on a Zynq, specifically the Z7020. I have petalinux running on the board, and Python 3.4.9. I am trying to build tensorflow following the instructions found here: https://www.tensorflow.org/install/install_raspbian#cross-compiling_from_sources
Note that petalinux and raspbian are both Debian derivatives, and the Z7020 has the same Cortex-A9 cores as the Raspberry Pi 0 and 1 series boards.
I am trying to build on an Ubuntu 16.04 host. The command I am using to build is:
sudo CI_DOCKER_EXTRA_PARAMS="-e CI_BUILD_PYTHON=python3 -e CROSSTOOL_PYTHON_INCLUDE=/home/rklein/Python-3.4.9/Include" tensorflow/tools/ci_build/ci_build.sh PI-PYTHON3 tensorflow/tools/ci_build/pi/build_raspberry_pi.sh PI_ONE
Bazel churns for about 2 hours and comes back with the following error message:
/home/rklein/tensorflow/bazel-ci_build-cache/.cache/bazel/_bazel_root/eab0--lots of hex digits--85e8/external/arm_compiler/bin/arm-linux-gnueablhf-gcc --lots of options
In file included from /usr/include/python2.7/Python.h:8:0, from ./tensorflow/python/lib/core/bfloat16.h:19,
from tensorflow/python/lib/core/bfloat16.h:18:
from /usr/include/python2.7/pyconfig.h:13:54:
fatal error: arm-linux-gnueabihf/python2.7/pyconfig.h: No such file or directory
#include <arm-linux-gnueabihf/python2.7/pyconfig.h>
^
compilation terminated.
What settings are needed to tell Bazel to use python3? Note that there is no /usr/include/python2.7 directory on the host machine, so I suspect that Bazel is doing some voodoo behind the scenes. The command
find ~ -name python2.7
comes up empty.
I have tried to read up as much as I can on Bazel, but the documentation seems pretty lean - any good references would be appreciated.
I can't help you with your error message (or Bazel altogether). However I installed TensorFlow on an Xilinx Zynq Ultrascale+ with a Petalinux kernel and an Ubuntu (arm64) root filesystem. It's not the same exact chip (but the installation process should be similar). I didn't build TensorFlow myself, instead I used the packages provided by the tensorflow-on-arm project. Maybe my experience will be useful for other people to get TensorFlow running:
You need a working OS (Xilinx has documentation for that). Depending on your chip you need either a 32-bit (armhf) or 64-bit (arm64) rootfs. I used an Ubuntu rootfs, so I could use apt to install packages.
You need to install some dependencies. I followed the instructions from the tensorflow-on-arm project.
apt-get install openjdk-8-jdk automake autoconf curl zip unzip libtool swig libpng12-dev zlib1g-dev pkg-config git g++ wget xz-utils
You also need Python (be sure to install Python v3.5 - not Python v3.6, etc.).
apt-get install python3-numpy python3-dev python3-pip python3-mock
I also needed to install two not listed packages.
apt-get install cython3 libhdf5-dev
Install some pip3 packages (you might want to install those in a virtual-environment and also update pip3).
pip3 install -U --user keras_applications==1.0.5 --no-deps
pip3 install -U --user keras_preprocessing==1.0.3 --no-deps
pip3 install -U --user numpy grpcio h5py
Now you should download the TensorFlow pip package. The different packages are listed under Releases. I chose TensorFlow v.1.12 for Python v3.5 and arm64 / aarch64.
wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.12.0/tensorflow-1.12.0-cp35-none-linux_aarch64.whl
Now you can install the package with pip3.
pip3 install -U --user tensorflow-1.12.0*
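To verify the install, a trivial check like the one below should be enough; it assumes the v1.12 wheel from above, which still uses the TensorFlow 1.x session API:
import tensorflow as tf

print(tf.__version__)                                   # expect 1.12.0
print(tf.Session().run(tf.constant("hello from the board")))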
I hope it works for you!

Installing XGBoost

I am trying use the XGBoost package, but I am having trouble installing it. I am following the installation guide
here
https://xgboost.readthedocs.io/en/latest/build.html#python-package-installation. I have successfully built xgboost for OSX using
git clone --recursive https://github.com/dmlc/xgboost
cd xgboost; cp make/minimum.mk ./config.mk; make -j4
However, when I try to install the python package in my terminal using this code
cd python-package; sudo python setup.py install
I get the error python: command not found. I am not sure why I get this error, because I have Python installed and I can run IPython notebooks. Python is installed here on my computer: /usr/local/Cellar/python/2.7.8/Frameworks/Python.framework/Versions/2.7/lib/python2.7. Do I need to add a path in my .bash_profile to access it? I don't understand why I can't use python from the command line.
I have answered a similar issue in this question. You can install the xgboost library along with other essential libraries as follows (please choose based on which libraries are sufficient for your project); my main focus in this answer is to make it helpful for setting up most data science projects that require sklearn, pandas, scipy and xgboost, along with visualization libraries.
# installing essentials
apt-get update; \
apt-get install -y \
python python-pip \
build-essential \
python-dev \
python-setuptools \
python-matplotlib \
libatlas-dev \
curl \
libatlas3gf-base && \
apt-get clean
# upgrading pip
curl -O https://bootstrap.pypa.io/get-pip.py && \
python get-pip.py && \
rm get-pip.py
# installing libraries
pip install numpy==1.13.1
pip install scipy
pip install -U scikit-learn
pip install seaborn
pip install --pre xgboost
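Once the installs finish, a tiny end-to-end run can confirm that xgboost, numpy and scikit-learn all import and work together. This is only a sketch on made-up data, not part of any real project:
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

X = np.random.rand(200, 5)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

dtrain = xgb.DMatrix(X_tr, label=y_tr)
dtest = xgb.DMatrix(X_te, label=y_te)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)
print("mean predicted probability:", booster.predict(dtest).mean())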
If you're still having environment issues I would suggest using this Dockerfile. You might also find Datmo conversion useful to facilitate this.
DISCLAIMER: I work at this company called Datmo, which is building a community of developers by simplifying the machine learning workflow.
If you have python in your /usr/bin/ directory, all you need to do is add that directory to your PATH.
Add this line to your .bash_profile and restart your shell:
export PATH="$PATH:/usr/bin"
Then you should be able to use any of the python versions in your /usr/bin directory: python, python3, etc. Hope this helps.

How do I use python-openbabel in Travis CI?

I use Travis CI as part of a Toxicology mapping project. For this project I require python-openbabel as a dependency, so I have added the apt-get install step to the .travis.yml file, shown below (comments removed).
language: python
python:
- "2.7"
before_install:
- sudo apt-get update -qq
- sudo apt-get install python-openbabel
install: "pip install -r requirements.txt"
script: nosetests tox.py
However, all these attempts failed with the error message Error: SWIG failed. Is Open Babel installed?. I have tried adding SWIG to the list of applications to be installed, to no avail.
Additionally, I have attempted to add the entire build process as proposed by Openbabel itself, this yields the following travis.yml:
language: python
python:
- "2.7"
before_install:
- sudo apt-get update -qq
- sudo apt-get install python-openbabel
- wget http://downloads.sourceforge.net/project/openbabel/openbabel/2.3.1/openbabel-2.3.1.tar.gz?r=http://%3A%2F%2Fsourceforge.net%2Fprojects%2Fopenbabel%2Fopenbabel%2F2.3.1%2Fts=1393727248&use_mirror=switch
- tar zxf openbabel-2.3.1.tar.gz
- mkdir build
- cd build
- cmake ../openbabel-2.3.1 -DPYTHON_BINDINGS=ON
- make
- make install
- export PYTHONPATH=/usr/local/lib:$PYTHONPATH
install: "pip install -r requirements.txt"
script: nosetests tox.py
This fails when trying to untar the downloaded file.
All the failed builds can be seen on Travis-CI: https://travis-ci.org/ToxProject/ToxProject
The Github repo is here: https://github.com/ToxProject/ToxProject
In short, how do I get python-openbabel working with Travis-CI?
The version of openbabel installed via apt-get is 1.7, while the version specified in setup.py / requirements.txt is openbabel>=1.8.
This means the package installed by apt-get does not satisfy requirements.txt, so pip tries to install openbabel itself regardless of the old version already installed, and the virtualenv doesn't use the already-installed system packages anyway.
When installing openbabel via pip, it needs the libopenbabel header files, which are not included in libopenbabel4 (the package automatically pulled in by python-openbabel). The version of libopenbabel-dev in the Ubuntu 12.04 image used by Travis CI doesn't satisfy the needs of openbabel==1.8.
Solution:
Install newer versions of libopenbabel-dev and libopenbabel4 manually:
before_install:
- sudo apt-get install -qq -y swig python-dev
- wget http://mirrors.kernel.org/ubuntu/pool/universe/o/openbabel/libopenbabel4_2.3.2+dfsg-1.1_amd64.deb
- sudo dpkg -i libopenbabel4_2.3.2+dfsg-1.1_amd64.deb
- wget http://mirrors.kernel.org/ubuntu/pool/universe/o/openbabel/libopenbabel-dev_2.3.2+dfsg-1.1_amd64.deb
- sudo dpkg -i libopenbabel-dev_2.3.2+dfsg-1.1_amd64.deb
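Once those packages and the pip requirements are in place, a quick import check can confirm the bindings work in the Travis environment. This is only a sketch; the module names below are the ones exposed by the OpenBabel 2.x Python bindings (newer releases move them under openbabel.pybel):
import pybel

mol = pybel.readstring("smi", "CCO")   # build an ethanol molecule from a SMILES string
print("molecular weight:", mol.molwt)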
I see that the build now fails at the pip install requirements stage. Travis creates a virtual environment for running Python. By default, Python packages installed on the system (i.e. via apt-get) will not be available unless you add this to your travis.yml:
virtualenv:
system_site_packages: true
I had the same problem with python-qt4 and python-qgis; here is a travis.yml file I used recently: https://github.com/anitagraser/TimeManager/blob/master/.travis.yml
