Opencv not being installed in Raspbian Stretch - python

I am installing OpenCV on Raspbian Stretch on a Raspberry Pi 3, following the tutorial by Dr. Adrian Rosebrock, using the commands shown in the code section below.
sudo apt-get update && sudo apt-get upgrade
sudo apt-get install build-essential cmake pkg-config
sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
sudo apt-get install libxvidcore-dev libx264-dev
sudo apt-get install libgtk2.0-dev libgtk-3-dev
sudo apt-get install libatlas-base-dev gfortran
sudo apt-get install python2.7-dev python3-dev
cd ~
wget -O opencv.zip https://github.com/Itseez/opencv/archive/3.3.0.zip
unzip opencv.zip
wget -O opencv_contrib.zip https://github.com/Itseez/opencv_contrib/archive/3.3.0.zip
unzip opencv_contrib.zip
wget https://bootstrap.pypa.io/get-pip.py
sudo python get-pip.py
sudo python3 get-pip.py
sudo pip install virtualenv virtualenvwrapper
sudo rm -rf ~/.cache/pip
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh
echo -e "\n# virtualenv and virtualenvwrapper" >> ~/.profile
echo "export WORKON_HOME=$HOME/.virtualenvs" >> ~/.profile
echo "export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3" >> ~/.profile
echo "source /usr/local/bin/virtualenvwrapper.sh" >> ~/.profile
mkvirtualenv cv -p python2
mkvirtualenv cv -p python3
pip install numpy
workon cv
cd ~/opencv-3.3.0/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.3.0/modules \
-D BUILD_EXAMPLES=ON ..
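# before running make, scroll through the cmake summary and confirm that the
# Python 2 / Python 3 sections list the interpreter, numpy and packages path
# from the cv virtualenv; if those entries are empty, the Python bindings
# will not be built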
# edit /etc/dphys-swapfile and set: CONF_SWAPSIZE=1024
sudo /etc/init.d/dphys-swapfile stop
sudo /etc/init.d/dphys-swapfile start
make -j4
sudo make install
sudo ldconfig
ls -l /usr/local/lib/python2.7/site-packages/
According to the tutorial I should find the OpenCV + Python bindings after this ls command, but I am not finding them in either site-packages or dist-packages. Also, when I import cv or cv2 in my program it says no module of this name was found, which suggests that OpenCV is not installed. Considering that all of the steps above executed without any error, why is this happening? Kindly guide.
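If the compile itself succeeded, one possibility is that the bindings simply landed in a directory other than the one the tutorial lists. A rough way to check, assuming default install paths (the exact Python version directory may differ on your system):
find /usr/local/lib -name "cv2*.so"
# example only -- adjust both paths to whatever the find command reports:
# ln -s /usr/local/lib/python3.5/dist-packages/cv2.so ~/.virtualenvs/cv/lib/python3.5/site-packages/cv2.so
If a cv2 shared object shows up, symlinking it into the cv virtualenv's site-packages like this should make import cv2 work from inside the environment.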

Related

How to set default python3 to py3.8 in the Dockerfile?

I tried to alias python3 to python3.8 in the Dockerfile, but it doesn't work for me. I am using ubuntu:18.04.
Step 25/41 : RUN apt-get update && apt-get install -y python3.8
---> Using cache
---> 9fa81ca14a53
Step 26/41 : RUN alias python3="python3.8" && python3 --version
---> Running in d7232d3c8b8f
Python 3.6.9
As you can see, python3 is still 3.6.9. How can I fix this issue?
Thanks.
EDIT
Just attach my Dockerfile:
##################################################################################################################
# Build
#################################################################################################################
#FROM openjdk:8
FROM ubuntu:18.04
############## Linux and perl packages ###############
RUN apt-get update && \
apt-get install -y openjdk-8-jdk && \
apt-get install -y ant && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer && \
apt-get update -y && \
apt-get install curl groff python-gdbm -y;
# Fix certificate issues, found as of
# https://bugs.launchpad.net/ubuntu/+source/ca-certificates-java/+bug/983302
RUN apt-get update && \
apt-get install -y ca-certificates-java && \
apt-get clean && \
update-ca-certificates -f && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/oracle-jdk8-installer;
# Setup JAVA_HOME, this is useful for docker commandline
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME
# install git
RUN apt-get update && \
apt-get install -y mysql-server && \
apt-get install -y uuid-runtime git jq python python-dev python-pip python-virtualenv libdbd-mysql-perl && \
rm -rf /var/lib/apt/lists/* && \
apt-get install perl && \
perl -MCPAN -e 'CPAN::Shell->install("Inline")' && \
perl -MCPAN -e 'CPAN::Shell->install("DBI")' && \
perl -MCPAN -e 'CPAN::Shell->install("List::MoreUtils")' && \
perl -MCPAN -e 'CPAN::Shell->install("Inline::Python")' && \
perl -MCPAN -e 'CPAN::Shell->install("LWP::Simple")' && \
perl -MCPAN -e 'CPAN::Shell->install("JSON")' && \
perl -MCPAN -e 'CPAN::Shell->install("LWP::Protocol::https")';
RUN apt-get update && \
apt-get install --yes cpanminus
RUN cpanm \
CPAN::Meta \
YAML \
DBI \
Digest::SHA \
Module::Build \
Test::Most \
Test::Weaken \
Test::Memory::Cycle \
Clone
# Install perl modules for network and SSL (and their dependencies)
RUN apt-get install --yes \
openssl \
libssl-dev \
liblwp-protocol-https-perl
RUN cpanm \
LWP \
LWP::Protocol::https
# New module for v1.2 annotation
RUN perl -MCPAN -e 'CPAN::Shell->install("Text::NSP::Measures::2D::Fisher::twotailed")'
#############################################
############## python packages ###############
# python packages
RUN pip install pymysql==0.10.1 awscli boto3 pandas docopt fastnumbers tqdm pygr
############## python3 packages ###############
# python3 packages
RUN apt-get update && \
apt-get install -y python3-pip && \
python3 -m pip install numpy && \
python3 -m pip install pandas && \
python3 -m pip install sqlalchemy && \
python3 -m pip install boto3 && \
python3 -m pip install pymysql && \
python3 -m pip install pymongo;
RUN python3 -m pip install pyfaidx
#############################################
#############################################
############# expose tcp ports
EXPOSE 3306/tcp
EXPOSE 80/tcp
EXPOSE 8080
############# RUN entrypoint.sh
# commented out for testing
ENTRYPOINT ["./entrypoint.sh"]
When I install the package pyfaidx with the default python3.6, it raises an error. I found that python3.8 can install it, so I want to switch to python3.8 to install all of the py3 packages.
A Bash alias that you define in a RUN statement is available only in that shell session. When the RUN statement finishes executing, you exit the session, effectively forgetting any aliases you set up there.
See also: How can I set Bash aliases for docker containers in Dockerfile?
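As a minimal sketch of why this happens: each RUN statement below starts a fresh shell, so the alias defined in the first layer no longer exists in the second one.
RUN alias python3="python3.8"
# new layer, new shell: the alias above is gone, so this still prints the
# distribution default (3.6.x on ubuntu:18.04)
RUN python3 --version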
Another option is to use update-alternatives, e.g.,
# update-alternatives --install `which python3` python3 `which python3.8` 20
update-alternatives: using /usr/bin/python3.8 to provide /usr/bin/python3 (python3) in auto mode
# python3 --version
Python 3.8.0
This may interfere with other packages in the container that do require Python 3.6, which was the default on Ubuntu 18.04 back in the day. Furthermore, pip's authors do not recommend using pip to install system-wide packages like that; in fact, newer pip versions emit a warning when pip is used globally the way your Dockerfile does.
Therefore a better course of action is using a virtualenv:
# apt install -y python3-venv python3.8-venv
...
# python3.8 -m venv /usr/local/venv
# /usr/local/venv/bin/pip install -U pip setuptools wheel
# /usr/local/venv/bin/pip install -U pyfaidx
... (etc)
You can also "enter" your virtualenv by activating it:
root@a1d0210118a8:/# source /usr/local/venv/bin/activate
(venv) root@a1d0210118a8:/# python -V
Python 3.8.0
See also: Use different Python version with virtualenv.
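Put into Dockerfile form, the virtualenv approach sketched above might look roughly like this (the /usr/local/venv path and package list mirror the commands above; adjust to your needs):
FROM ubuntu:18.04
RUN apt-get update && \
    apt-get install -y python3.8 python3.8-venv python3-venv && \
    rm -rf /var/lib/apt/lists/*
RUN python3.8 -m venv /usr/local/venv
# everything installed through this pip lands in the 3.8 virtualenv
RUN /usr/local/venv/bin/pip install -U pip setuptools wheel && \
    /usr/local/venv/bin/pip install pyfaidx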

How to install Python2.7.5 in Ubuntu docker image?

I have a specific requirement to install Python 2.7.5 in Ubuntu. I could install 2.7.18 without any issues.
Below is my Dockerfile
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
RUN apt-get update -y \
&& apt-get install -y python2.7.x \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["python"]
however if I set it to python2.7.5
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
RUN apt-get update -y \
&& apt-get install -y python2.7.5 \
&& rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["python"]
it is throwing the following error
E: Couldn't find any package by regex 'python2.7.5'
I want to install Python 2.7.5 along with the relevant pip; what should I do?
This version is no longer available from the Canonical mirrors; it was released in 2013.
As a result, getting both python and pip of that vintage working together is challenging.
Python 2.7.5 + PIP on centos7
This may be the simplest way if Ubuntu is not a strict requirement.
ARG CENTOS_VERSION=7
FROM centos:$CENTOS_VERSION
# Python 2.7.5 is installed with centos7 image
# Add repository for PIP
RUN yum install -y epel-release
# Install pip
RUN yum install -y python-pip
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.5 on ubuntu
I've been able to install it from source, but installing pip on top of it has not been a success, even using
https://bootstrap.pypa.io/pip/2.7/get-pip.py
ARG UBUNTU_VERSION=18.04
FROM ubuntu:$UBUNTU_VERSION
ARG PYTHON_VERSION=2.7.5
# Install dependencies
# PIP - openssl version > 1.1 may be an issue (try older ubuntu images)
RUN apt-get update \
&& apt-get install -y wget gcc make openssl libffi-dev libgdbm-dev libsqlite3-dev libssl-dev zlib1g-dev \
&& apt-get clean
WORKDIR /tmp/
# Build Python from source
RUN wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz \
&& tar --extract -f Python-$PYTHON_VERSION.tgz \
&& cd ./Python-$PYTHON_VERSION/ \
&& ./configure --enable-optimizations --prefix=/usr/local \
&& make && make install \
&& cd ../ \
&& rm -r ./Python-$PYTHON_VERSION*
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.6 + pip on ubuntu
Ubuntu 14.04 still has working mirrors (for how long, who knows?).
Its Python packages are really close to what you expect, so you may try running your scripts with that version.
ARG UBUNTU_VERSION=14.04
FROM ubuntu:$UBUNTU_VERSION
RUN apt-get update \
&& apt-get install -y python python-pip \
&& apt-get clean
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.5 + pip on ubuntu: not working yet, but it might with more effort
Here is what I've tried, with no success so far.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:$UBUNTU_VERSION
ARG PYTHON_VERSION=2.7.5
# Install dependencies
RUN apt-get update \
&& apt-get install -y wget gcc make openssl libffi-dev libgdbm-dev libsqlite3-dev libssl-dev zlib1g-dev \
&& apt-get clean
WORKDIR /tmp/
# Build python from source
RUN wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz \
&& tar --extract -f Python-$PYTHON_VERSION.tgz \
&& cd ./Python-$PYTHON_VERSION/ \
&& ./configure --enable-optimizations --prefix=/usr/local \
&& make && make install \
&& cd ../ \
&& rm -r ./Python-$PYTHON_VERSION*
# Build pip from source
RUN wget https://bootstrap.pypa.io/pip/2.7/get-pip.py \
&& python get-pip.py
RUN python --version
ENTRYPOINT [ "python" ]
Python 2.7.9 with pip - as requested in comment
You can use this Dockerfile; building Python with --with-ensurepip=install also installs pip.
ARG UBUNTU_VERSION=16.04
FROM ubuntu:$UBUNTU_VERSION
ARG PYTHON_VERSION=2.7.9
# Install dependencies
RUN apt-get update \
&& apt-get install -y wget gcc make openssl libffi-dev libgdbm-dev libsqlite3-dev libssl-dev zlib1g-dev \
&& apt-get clean
WORKDIR /tmp/
# Build Python from source
RUN wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz \
&& tar --extract -f Python-$PYTHON_VERSION.tgz \
&& cd ./Python-$PYTHON_VERSION/ \
&& ./configure --with-ensurepip=install --enable-optimizations --prefix=/usr/local \
&& make && make install \
&& cd ../ \
&& rm -r ./Python-$PYTHON_VERSION*
RUN python --version \
&& pip --version
ENTRYPOINT [ "python" ]
The simplest possible solution:
sudo apt-get install libssl-dev openssl
wget https://www.python.org/ftp/python/2.7.5/Python-2.7.5.tgz
tar xzvf Python-2.7.5.tgz
cd Python-2.7.5
./configure
make
sudo make install
After the installation completes, set the installed Python as the default one.
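One way to do that last step, assuming make install put the interpreter in /usr/local/bin/python2.7, is update-alternatives:
sudo update-alternatives --install /usr/bin/python python /usr/local/bin/python2.7 10
python --version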

Docker how to make python 3.8 as default

I'm trying to update an existing Dockerfile to switch from python3.5 to python3.8; previously it was creating symlinks for python3.5 and pip3 like this:
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN ln -s /usr/bin/python3 /usr/bin/python
I've updated the Dockerfile to install python3.8 from the deadsnakes PPA:
apt-get install python3-pip python3.8-dev python3.8-distutils python3.8-venv
If I remove python3-pip, it complains about gcc:
C compiler or Python headers are not installed on this system. Try to run: sudo apt-get install gcc python3-dev
With these packages in place I'm trying to update the existing symlink creation to something like this:
RUN ln -s /usr/bin/pip3 /usr/local/lib/python3.8/dist-packages/pip
RUN ln -s /usr/bin/pip /usr/local/lib/python3.8/dist-packages/pip
RUN ln -s /usr/bin/python3.8 /usr/bin/python3
it fails, saying
ln: failed to create symbolic link '/usr/bin/python3': File exists
which I assume fails because python3 points to python3.6.
If I try RUN ln -s /usr/bin/python3.8 /usr/bin/python, it doesn't complain about the symlink and the image gets built successfully, but it fails later while installing requirements (we use Makefile targets to install dependencies inside the container using pip and pip-sync):
ERROR: Cannot uninstall 'python-apt'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
which I assume is because python-apt gets installed as part of the default python3.6 installation and python3.8's pip can't uninstall it.
PS: my Dockerfile image is based on Ubuntu 18.04, which comes with python3.6 as the default.
How can I properly switch the Dockerfile / image from python3.5 to python3.8, so that I can later use pip directly and have it point to python3.8's pip?
Replacing the system python in this way is usually not a good idea (as it can break operating-system-level programs which depend on those executables) -- I go over that a little bit in this video I made "why not global pip / virtualenv?"
A better way is to create a prefix and put that on the PATH earlier (this allows system executables to continue to work, but bare python / python3 / etc. will use your other executable)
in the case of deadsnakes which it seems like you're using, something like this should work:
FROM ubuntu:bionic
RUN : \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
software-properties-common \
&& add-apt-repository -y ppa:deadsnakes \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
python3.8-venv \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& :
RUN python3.8 -m venv /venv
ENV PATH=/venv/bin:$PATH
the ENV line is the key here, that puts the virtualenv on the beginning of the path
$ docker build -t test .
...
$ docker run --rm -ti test bash -c 'which python && python --version && which pip && pip --version'
/venv/bin/python
Python 3.8.5
/venv/bin/pip
pip 20.1.1 from /venv/lib/python3.8/site-packages/pip (python 3.8)
disclaimer: I'm the maintainer of deadsnakes
Why not just build a new image from ubuntu:18.04 with the desired config you need?
Like this:
FROM ubuntu:18.04
RUN apt update && apt install software-properties-common -y
RUN add-apt-repository -y ppa:deadsnakes/ppa && apt update && apt install -y python3.8
RUN ln -s /usr/bin/pip3 /usr/bin/pip && \
ln -s /usr/bin/python3.8 /usr/bin/python
You can install and enable your python version.
# Python 3.8 and pip3
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa -y
RUN apt-get install -y python3.8
RUN ln -s /usr/bin/python3.8 /usr/bin/python
RUN apt-get install -y python3-pip
Sometimes, starting from a clean OS image (like a fresh Ubuntu) is not an option because the current base image is too complicated. For example, my base image is FROM ufoym/deepo:all-cu101.
So, to switch the existing python (3.6) to python 3.8, I added these 2 lines:
RUN apt-get update -qq && apt-get install -y -qq python3.8
RUN rm /usr/bin/python && rm /usr/bin/python3 && ln -s /usr/bin/python3.8 /usr/bin/python && ln -s /usr/bin/python3.8 /usr/bin/python3 \
&& rm /usr/local/bin/python && rm /usr/local/bin/python3 && ln -s /usr/bin/python3.8 /usr/local/bin/python && ln -s /usr/bin/python3.8 /usr/local/bin/python3 \
&& apt-get install -y python3-pip python-dev python3.8-dev && python3 -m pip install pip --upgrade
The first step installs python3.8.
The second step repoints the python and python3 symlinks to python3.8.
After that, install python3-pip and upgrade it to make sure pip is using the current Python 3.8 environment.
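A quick way to confirm that pip now targets the new interpreter is to ask it directly; it should report something like "(python 3.8)" at the end:
RUN python3 -m pip --version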

Building Python3.7.3 from source missing '_ctypes'

I'm trying to build Python-3.7.3 from source with ensurepip, but I'm getting this error:
ModuleNotFoundError: No module named '_ctypes'
All of the answers online say that libffi-dev is needed, but I have it installed and it is still giving me this error.
root@4b6d672f1334:/Python-3.7.3# find / -name libffi.*
/usr/lib/pkgconfig/libffi.pc
/usr/lib/libffi.a
/usr/lib/libffi.so
/usr/lib/libffi.so.5.0.10
/usr/lib/libffi.so.5
/usr/share/info/libffi.info.gz
The build runs in a container image based on ubuntu:10.04.
It is that old on purpose: I'm using PyInstaller to compile the application, it needs to run on machines with an old glibc (2.11), and this is the only image I could find with a version that old.
I have done the same for Python-2.7.16 and it worked without any issues.
Update
Python-3.6.8 is working without any issues as well
I was able to find a solution here.
The issue is probably an old version of libffi-dev; the solution is to build and install libffi from source and then build Python 3.7.3.
Build libffi:
wget ftp://sourceware.org/pub/libffi/libffi-3.2.1.tar.gz
tar xzf libffi-3.2.1.tar.gz
cd libffi-3.2.1
./configure --disable-docs
make
make install
Build Python3.7.3:
wget https://www.python.org/ftp/python/3.7.3/Python-3.7.3.tgz
tar xzf Python-3.7.3.tgz
cd Python-3.7.3
export LD_LIBRARY_PATH=/usr/local/lib && \
export LD_RUN_PATH=/usr/local/lib && \
./configure --enable-optimizations --prefix=/usr/ --with-ensurepip=install --enable-shared LDFLAGS="-L/usr/local/lib" CPPFLAGS="-I /usr/local/lib/libffi-3.2.1/include"
make
make install
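Once make install finishes, a one-liner confirms that _ctypes was actually built this time:
python3 -c "import ctypes, _ctypes; print('ctypes OK')"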
This is my solution on debian 6.0.60, based on Amir Rossert's solution above. Thanks a lot!
(1) Install libffi
tar zxf libffi-3.3.tar.gz
cd libffi-3.3
./configure
make
make install
(2) Install Python 3.8
tar zxf Python-3.8.5.tgz
cd Python-3.8.5
export LD_LIBRARY_PATH=/usr/local/lib && \
export LD_RUN_PATH=/usr/local/lib && \
./configure --prefix=/usr/local/python38 --with-openssl=/usr/local/openssl111 --enable-shared --enable-optimizations --with-system-ffi=/usr/local/lib/
make
make install
ln -s /usr/local/python38/bin/python3 /usr/local/bin/python3
ln -s /usr/local/python38/bin/pip3 /usr/local/bin/pip3
touch /etc/ld.so.conf.d/python38.conf
echo "/usr/local/python38/lib" > /etc/ld.so.conf.d/python38.conf
ldconfig
Ok, it works well.
The problem on Ubuntu 10.04 is the Cflags line in libffi.pc from libffi-dev. The following fixes the issue without upgrading the libffi package:
$ sed -i 's/Cflags:.*/Cflags: -I${includedir}\/x86_64-linux-gnu/' /usr/lib/pkgconfig/libffi.pc
The following Dockerfile builds Python 3.9.7 on Ubuntu 10.04:
FROM ubuntu:10.04
ARG PYVER=3.9.7
# Change to old-releases
RUN sed -i 's/http:\/\/archive.ubuntu.com\//http:\/\/old-releases.ubuntu.com\//' /etc/apt/sources.list
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
libreadline-dev libssl-dev zlib1g-dev libffi-dev \
pkg-config build-essential
# Fix broken pkg-config for libffi (used by ctypes)
RUN sed -i 's/Cflags:.*/Cflags: -I${includedir}\/x86_64-linux-gnu/' /usr/lib/pkgconfig/libffi.pc
ADD https://www.python.org/ftp/python/$PYVER/Python-$PYVER.tgz /app/Python-$PYVER.tgz
RUN cd /app && tar zxvf Python-$PYVER.tgz && cd Python-$PYVER && \
./configure --prefix /opt/python3 && make && make install
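Building and smoke-testing the image (the tag is arbitrary):
docker build -t py39-lucid .
docker run --rm py39-lucid /opt/python3/bin/python3 -c "import ctypes; print('ctypes OK')"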

How to install xapian with Python 3.6 on Ubuntu 16.04?

I installed Python 3.6 on Ubuntu 16.04 on Docker using the ppa:jonathonf/python-3.6 repository. Now I'd like to install xapian so I can use it with Python. I have not found any ready-made packages, so I am trying to build it from sources. I set PYTHON3 and PYTHON3_LIB parameters to point to Python 3.6. During the build process I get the following error:
ImportError: libxapian.so.30: cannot open shared object file: No such file or directory
I tried xapian versions 1.3.7 and 1.4.5 without luck.
How can I install xapian?
Here's a Dockerfile to reproduce my error:
FROM ubuntu:16.04
RUN apt-get update \
&& apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository ppa:jonathonf/python-3.6
RUN apt-get update \
&& apt-get install -y python3-pip docker.io python3.6 python3.6-dev software-properties-common \
python-software-properties build-essential wget unzip cmake python3-sphinx \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3.6 python
RUN python -m pip install --upgrade pip
# install xapian 1.4.5
RUN apt-get update && apt-get install -y curl uuid-dev zlib1g-dev
WORKDIR /root
RUN curl --silent --show-error --fail --next -O https://oligarchy.co.uk/xapian/1.4.5/xapian-core-1.4.5.tar.xz
RUN curl --silent --show-error --fail --next -O https://oligarchy.co.uk/xapian/1.4.5/xapian-bindings-1.4.5.tar.xz
RUN tar xvf xapian-core-1.4.5.tar.xz
RUN tar xvf xapian-bindings-1.4.5.tar.xz
WORKDIR /root/xapian-core-1.4.5
RUN ./configure && make && make install
WORKDIR /root/xapian-bindings-1.4.5
RUN ./configure PYTHON3=/usr/bin/python3.6 PYTHON3_LIB=/usr/lib/python3.6 --with-python3 && make && make install
RUN python -c "import xapian"
The problem is that the Xapian library (libxapian.so.30) is being installed into /usr/local/lib by default, but Ubuntu doesn't know that it's been put there yet. You can tell it by adding:
RUN ldconfig
after installing the core (so before you change WORKDIR to build the bindings).
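In the Dockerfile from the question, that would look roughly like this:
WORKDIR /root/xapian-core-1.4.5
RUN ./configure && make && make install
# refresh the shared-library cache so libxapian.so.30 in /usr/local/lib is found
RUN ldconfig
WORKDIR /root/xapian-bindings-1.4.5
RUN ./configure PYTHON3=/usr/bin/python3.6 PYTHON3_LIB=/usr/lib/python3.6 --with-python3 && make && make install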
There's some helpful information about ldconfig and library search paths on Ubuntu in the answers to this Unix Stackexchange question.
