I want to install several private Python packages when I create an instance (AI Platform).
I had the following startup script, but it did not install what I needed (and also didn't show any errors):
startup_script.sh:
pip install my_custom_libraries
Why does it not work, and what do I need to do to make it work?
You can check the startup script logs at:
Compute Engine > (Your Instance Name) > Logs > Serial port 1 (console)
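If you prefer the command line, you can also fetch the same serial console output with gcloud (the instance name and zone below are placeholders):
gcloud compute instances get-serial-port-output my-instance --zone us-central1-a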
Also, I am not sure your instance has Python and pip installed by default.
startup_script.sh:
#!/bin/bash
apt-get update
apt-get install -yq git python python-pip
pip install --upgrade pip virtualenv
pip install my_custom_libraries
Check that your python and pip executables are at the proper path.
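Also, since these are private packages, a plain pip install my_custom_libraries only works if pip can actually reach them. If they live in a private Git repository or on a private package index, the install line has to say so; a rough sketch (the repository URL and index URL below are placeholders, not taken from your setup):
pip install git+https://github.com/your-org/my_custom_libraries.git
pip install --index-url https://pypi.example.com/simple/ my_custom_libraries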
I'm building a Docker image that is supposed to run a Python script that depends on some packages (numpy) I install during the build.
During the build everything appears to be installed correctly, but when I run the container it behaves as if those packages were never installed. What seems to be the problem with my code?
My Dockerfile looks like this:
FROM myimage as intermediate
WORKDIR ./app
COPY ./mathServer ./mathServer/
RUN apt-get update
RUN apt-get install sudo
RUN sudo apt-get install python3-pip -y
RUN sudo pip3 install numpy
RUN sudo pip3 install httpserver
RUN pip3 list
FROM myimage
WORKDIR ./app
COPY --from=intermediate ./app/* ./
CMD ["sh","-c","python3 mathServer/mathServer.py"]
I would expect docker run myimage to run mathServer.py successfully, but instead it complains about the numpy package:
ImportError: No module named 'numpy'
Also, if I replace the command "python3 mathServer/mathServer.py" with "pip3 list", the pip3 command does not exist. Somehow the packages installed during the build are not available when I actually run the container.
Please check your Docker build log. numpy requires a C compiler and a Fortran compiler to build and install from source, so it is likely the installation was not successful.
Consider trying pre-built images such as https://hub.docker.com/r/continuumio/miniconda/, adding numpy via RUN <PATH_TO>/conda install numpy -y,
or https://hub.docker.com/r/continuumio/anaconda3, which already has numpy installed.
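A rough sketch of that approach, reusing the mathServer layout from the question (untested; the image tag is just an example):
FROM continuumio/miniconda3
WORKDIR /app
COPY ./mathServer ./mathServer/
# conda ships prebuilt numpy binaries, so no C/Fortran compilers are needed
RUN conda install -y numpy
RUN pip install httpserver
CMD ["python", "mathServer/mathServer.py"]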
I have a Python 3.6 script that needs to get content from Azure Blob Storage, which I want to run in an Ubuntu 16.04 Docker container.
The problem
I am using this Dockerfile because I am also using pyodbc to connect to SQL Server. In my requirements file I have listed azure.storage, which is installed when the Docker image is built. But when I try to run the script I get the following error:
root@b61c65dadb5d:/app# python3 val.py
Traceback (most recent call last):
  File "val.py", line 12, in <module>
    from azure.storage.blob import BlockBlobService
  File "/usr/local/lib/python3.6/dist-packages/azure/storage/__init__.py", line 21, in <module>
    from .models import (
  File "/usr/local/lib/python3.6/dist-packages/azure/storage/models.py", line 27, in <module>
    from cryptography.hazmat.primitives.keywrap import(
  File "/usr/local/lib/python3.6/dist-packages/cryptography/hazmat/primitives/keywrap.py", line 12, in <module>
    from cryptography.hazmat.primitives.constant_time import bytes_eq
  File "/usr/local/lib/python3.6/dist-packages/cryptography/hazmat/primitives/constant_time.py", line 11, in <module>
    from cryptography.hazmat.bindings._constant_time import lib
ImportError: No module named '_cffi_backend'
What I have tried
I found a couple of suggestions after some searching, one of which was to run pip install cffi. When I try this I get:
root@b61c65dadb5d:/app# pip3 install cffi
Requirement already satisfied: cffi in /usr/local/lib/python3.6/dist-packages (1.12.2)
Requirement already satisfied: pycparser in /usr/local/lib/python3.6/dist-packages (from cffi) (2.19)
The same goes for pip install cryptography.
Because Python had trouble finding the azure module to begin with, I have this at the beginning of my script, so it should be able to find anything located in that directory:
import sys
sys.path.append('/usr/local/lib/python3.6/dist-packages')
(I know I can do this in the Dockerfile, and I will)
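(For reference, the Dockerfile equivalent would presumably be something like the line below, using the same path as in the traceback:)
ENV PYTHONPATH=/usr/local/lib/python3.6/dist-packages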
Others say that adding import cffi to the Python script solves the issue. It does not for me.
How to reproduce
This problem can easily be reproduced with this minimal Python script:
import sys
sys.path.append('/usr/local/lib/python3.6/dist-packages')
from azure.storage.blob import BlockBlobService
Then build an image based on Ubuntu 16.04, install Python 3.6, and install azure.storage with pip. Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apt-utils
RUN apt-get install -y software-properties-common
RUN add-apt-repository -y ppa:jonathonf/python-3.6
RUN apt-get update && apt-get install -y \
python3.6 \
python3.6-dev \
python3-pip \
python3-setuptools \
python3-wheel \
--no-install-recommends && \
python3.6 -m pip install --upgrade pip && \
rm -rf /var/lib/apt/lists/* && \
alias python=python3.6
RUN pip3 install azure.storage
COPY /app /app
WORKDIR /app
Either run it interactively or add a CMD to run the script. Note that the Python script must be located in a folder named "app" in the same directory as the Dockerfile.
If you are running it interactively, you can try pip3 install cffi to see that it's already installed.
Side note
I also want to mention that I have a similar problem when attempting to connect to a service bus on Azure. But I will create another question for that specific problem if I feel the need later.
I'm afraid there is no package named azure.storage on PyPI; it should be azure-storage, so the command RUN pip3 install azure.storage is incorrect. Actually, when you run pip install azure.storage to install the Azure Storage SDK for Python, it will still download azure_storage-0.36.0-py2.py3-none-any.whl and install the azure-storage package, but with some errors during installation.
I think those errors break the process of installing the packages that azure-storage requires, like cffi, cryptography, etc. So I suggest you use the correct package name, azure-storage, and try again.
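For example, the Dockerfile line would simply become:
RUN pip3 install azure-storage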
I was not able to make it work using the Ubuntu 16.04 image; however, it does work using the python:3.6 image.
As Peter Pan pointed out in his answer, the package name was wrong: it should be azure-storage, not azure.storage. Still, the problem persisted after changing it to the correct name (it appears pip was still able to fetch the correct package even with the wrong name). It might be a local issue on my machine; I can only speculate at this point.
Anyway, it works without any problem using the python:3.6 image, which is currently based on Debian 9, so that solves the issue for me.
If anyone else is experiencing similar issues while using Python to connect to Azure Blob Storage or other Azure features, as well as Azure SQL, here is what I ended up using:
FROM python:3.6
RUN apt-get update && apt-get install -y \
curl apt-utils apt-transport-https debconf-utils gcc build-essential
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get -y install msodbcsql17
RUN apt-get install -y unixodbc-dev
COPY /app /app
WORKDIR /app
RUN pip install -r requirements
CMD ["python", "val.py"]
The Azure SDK should work out of the box; all the other stuff is for connecting to SQL Server with ODBC.
Arguably, stuff like gcc should not be included in a production image, but that's not really relevant to this question.
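That said, if image size matters, a multi-stage build is one common way to keep gcc out of the final image. A sketch only, not tested against this exact setup (the msodbcsql17/unixodbc runtime setup from above would still need to be repeated in the final stage):
FROM python:3.6 AS builder
RUN apt-get update && apt-get install -y gcc build-essential unixodbc-dev
COPY requirements /tmp/requirements
# build wheels for all Python dependencies, including pyodbc
RUN pip wheel -r /tmp/requirements -w /wheels

FROM python:3.6-slim
COPY --from=builder /wheels /wheels
RUN pip install /wheels/*.whl
COPY /app /app
WORKDIR /app
CMD ["python", "val.py"]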
I am trying to run cv2, but when I try to import it, I get the following error:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
The suggested solution online is installing
apt install libgl1-mesa-glx
but this is already installed and the latest version.
NB: I am actually running this on Docker, and I am not able to check the OpenCV version. I tried importing matplotlib and that imports fine.
Add the following lines to your Dockerfile:
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y
These commands install the cv2 dependencies that are normally present on the local machine but might be missing in your Docker container, causing the issue.
[minor update on 20 Jan 2022: as Docker recommends, never put RUN apt-get update alone on its own line, as it causes caching issues]
The above solutions work, but their package sizes are quite big.
libGL.so.1 is provided by the libgl1 package, so the following is sufficient:
apt-get update && apt-get install -y libgl1
This is a slightly better solution, in my opinion. The python3-opencv package includes all the system dependencies of OpenCV.
RUN apt-get update && apt-get install -y python3-opencv
RUN pip install opencv-python
Try installing the opencv-python-headless Python package instead of opencv-python. It includes a precompiled binary wheel with no external dependencies (other than numpy) and is intended for headless environments like Docker. This saved almost 700 MB in my Docker image compared with using the python3-opencv Debian package (with all its dependencies).
The package documentation discusses this and the related (more expansive) opencv-contrib-python-headless pypi package.
Example reproducing the ImportError in the question
# docker run -it python:3.9-slim bash -c "pip -q install opencv-python; python -c 'import cv2'"
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.9/site-packages/cv2/__init__.py", line 5, in <module>
    from .cv2 import *
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
# docker run -it python:3.9-slim bash -c "pip -q install opencv-python-headless; python -c 'import cv2'"
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
For me, the only workaround that worked is the following:
# These are for libGL.so issues
# RUN apt-get update
# RUN apt install libgl1-mesa-glx
# RUN apt-get install -y python3-opencv
# RUN pip3 install opencv-python
RUN pip3 install opencv-python-headless==4.5.3.56
If you're on CentOS, RHEL, Fedora, or another Linux distro that uses yum, you'll want:
sudo yum install mesa-libGL -y
In my case it was enough to do the following, which also saves space compared to the solutions above:
RUN apt-get update && apt-get install -y --no-install-recommends \
libgl1 \
libglib2.0-0
Put this in the Dockerfile
RUN apt-get update
RUN apt install -y libgl1-mesa-glx
Before the line
COPY requirements.txt requirements.txt
For example
......
RUN apt-get update
RUN apt install -y libgl1-mesa-glx
COPY requirements.txt requirements.txt
......
I was getting the same error when trying to use OpenCV in the GCP App Engine flexible environment. Replacing "opencv-python" with "opencv-python-headless" in requirements.txt solved the problem.
The OpenCV documentation talks about different packages for desktop vs. server (headless) environments.
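For example, the relevant line in requirements.txt simply changes from opencv-python to:
opencv-python-headless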
I met this problem while using cv2 in a Docker container. I fixed it by installing opencv-contrib-python rather than opencv-python:
pip install opencv-contrib-python
Here is the solution you need:
pip install -U opencv-python
apt update && apt install -y libsm6 libxext6 ffmpeg libfontconfig1 libxrender1 libgl1-mesa-glx
I had the same issue on CentOS 8 after using pip3 install opencv on a non-GUI server lacking all sorts of graphics libraries.
dnf install opencv
This pulls in all the needed dependencies.
"installing opencv-python-headless instead of opencv-python"
this works in my case!
I was deploying my website to Azure and this exception popped up:
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
I then uninstalled the opencv-python package, installed the newer one, froze the requirements, and deployed again; that solved the problem.
For a Raspberry Pi, this worked for me:
sudo apt-get install ffmpeg libsm6 libxext6 -y
For me, the problem was related to proxy settings. I was using a Nexus mirror for PyPI; for opencv nothing worked until I connected to a different network.
On Rocky Linux 9, I resolved the error using:
dnf install mesa-libGLU
Use opencv-python-headless if you're using Docker or running in a server environment.
I got the same issue on Ubuntu desktop, and none of the other solutions worked for me.
libGL.so.1 was correctly installed, but for some reason Python wasn't able to see it:
$ ldconfig -p | grep libGL.so.1
libGL.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libGL.so.1
The only solution that worked was to force it in LD_LIBRARY_PATH. Add the following in your ~/.bashrc then run source ~/.bashrc or restart your shell:
export LD_LIBRARY_PATH="/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH"
I understand that forcing LD_LIBRARY_PATH is bad practice, but for me this is the only solution that works.
When I run apt-get -y install python3, it installs Python 3.5.2. How can I install Python 3.5.5?
Just to mention that I run these commands in Docker:
RUN apt-get update
RUN apt-get -y install python3 python3-pip wget default-jre
RUN pip3 install --upgrade pip
RUN pip3 install virtualenv
Thanks.
Two options:
1. Find a repository that has that version and add it in your container build script. There's a nice explanation of how to add a repository for apt-get on Ask Ubuntu.
2. Build it from source, using the official repository. This is explained on Stack Overflow, albeit for a different Python version; a rough sketch follows below.
Also, you may be able to find a Docker image on Docker Hub that already has Python 3.5.5 in it. I didn't see one with a quick search, but it might be worth a closer look.
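For option 2, a minimal (untested) Dockerfile sketch of building 3.5.5 from source; make altinstall installs it as python3.5 alongside the distribution's python3 instead of overwriting it:
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
    wget ca-certificates build-essential \
    libssl-dev zlib1g-dev libbz2-dev libsqlite3-dev
RUN wget https://www.python.org/ftp/python/3.5.5/Python-3.5.5.tgz && \
    tar xzf Python-3.5.5.tgz && \
    cd Python-3.5.5 && \
    ./configure && make -j "$(nproc)" && make altinstall
RUN python3.5 --version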