Since Google Colab upgraded from Python 3.7 to 3.8, I can't load my saved BERTopic model, which was trained on Google Colab before the upgrade.
Also, the "use fallback runtime version" feature is no longer available in the Command Palette [it was only available until mid-December].
Is there any way to load the old BERTopic model in Google Colab? I can't figure out the exact source of the conflict.
I ran the commands below to try it under Python 3.7, but I'm having trouble importing BERTopic.
!sudo apt-get update -y
!sudo apt-get install python3.7
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1
!sudo update-alternatives --config python3
!apt-get install python3-pip
!python -m pip install --upgrade pip --user
!pip install bertopic
ERROR: Could not build wheels for hdbscan, which is required to install pyproject.toml-based projects.
You can install build-essential before installing hdbscan:
apt-get update && apt-get install -y build-essential
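For example, in a Colab cell (a sketch of the full sequence; adding python3.7-dev is my assumption, needed only because the interpreter was switched to 3.7 above and building the hdbscan extension requires the matching Python headers):
!apt-get update && apt-get install -y build-essential python3.7-dev
!pip install hdbscan   # the wheel should now build, since a C compiler is available
!pip install bertopic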
I am trying to install the tensorflow<2.0,>=1.15 pip package during a Docker build. The build fails, and I get this error in my terminal during the pip installation:
> [12/12] RUN pip3 install --no-cache-dir -r requirements.txt:
#16 0.488 ERROR: Could not find a version that satisfies the requirement tensorflow<2.0,>=1.15 (from versions: none)
#16 0.489 ERROR: No matching distribution found for tensorflow<2.0,>=1.15
To replicate the error:
Dockerfile:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get install -y unzip
RUN apt-get install -y build-essential
RUN apt-get install -y python-all-dev
RUN apt-get install -y libexiv2-dev
RUN apt-get install -y libboost-python-dev
RUN apt-get install -y wget
COPY . /usr/src/app
WORKDIR /usr/src/app
ENV PYTHONUNBUFFERED True
RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt
requirements.txt:
tensorflow>=1.15,<2.0
I have tried building FROM (the first line in the Dockerfile) other Python versions, either 3.7 or lower, never newer. Still the same result.
I use Docker Desktop for Mac M1 version 4.3.2, Engine version 20.10.11.
When I run it on Fedora Linux, I can build it successfully.
I suspect this may be Docker-related. There might be a difference between Docker Desktop and Docker for Linux, but I might also be doing something wrong.
Have some of you folks ever encountered the same error? How did you solve this? Thanks for any tips.
TensorFlow 1.x does not support the Mac M1 chip: there are no 1.x wheels for the arm64 architecture, and on an M1 Docker Desktop pulls the arm64 variant of the python:3.7-slim-buster base image, so pip finds no matching distribution (this is presumably why the same build succeeds on your Fedora machine). It is recommended to install TensorFlow >= 2.5 on the Mac M1.
Take a look at these release notes from Mac Tensorflow:
https://github.com/apple/tensorflow_macos/releases
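If you really need TensorFlow 1.x, one possible workaround (my suggestion, not something covered by those release notes) is to force an amd64 build under Docker Desktop's QEMU emulation, since the 1.x wheels exist only for x86_64; expect the emulated build to be slow:
docker build --platform linux/amd64 .
# or, equivalently, pin the platform in the Dockerfile itself:
# FROM --platform=linux/amd64 python:3.7-slim-buster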
How would you explain the docker build failing with Dockerfile1, yet succeeding with Dockerfile2 (see below)?
1)
// Dockerfile1
FROM ubuntu:16.04
RUN apt-get -y update && \
apt-get -y install python-pip python-dev build-essential && \
pip install --upgrade pip && \
pip install --upgrade virtualenv && \
virtualenv /venv
docker build . fails with the following error:
Collecting pip
Downloading
https://files.pythonhosted.org/packages/0f/74/ecd13431bcc456ed390b44c8a6e917c1820365cbebcb6a8974d1cd045ab4/pip-10.0.1-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 8.1.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside
environment /usr
Successfully installed pip-10.0.1
Traceback (most recent call last):
File "/usr/bin/pip", line 9, in <module>
from pip import main
ImportError: cannot import name main
The command '/bin/sh -c apt-get -y update && apt-get -y install
python-pip python-dev build-essential && pip install --upgrade pip && pip install --upgrade virtualenv && virtualenv /venv' returned a non-zero code: 1
However, it succeeds if we split it into two RUN instructions.
2)
// Dockerfile2
FROM ubuntu:16.04
RUN apt-get -y update && \
apt-get -y install python-pip python-dev build-essential && \
pip install --upgrade pip
RUN pip install --upgrade virtualenv
The pip installation failure is related to this reported issue. So my questions are:
Why does docker build fail in the first case? If we just run those commands in bash, there won't be any error.
Why does docker build succeed in the second case? How is this related to Docker's layering concept?
Why does specifying a pip version in Dockerfile1 (i.e. pip install --upgrade pip==9.0.3) also solve the problem?
Update (May 6, 2018):
I've figured out the issue. What happens here is as below:
apt-get -y install python-pip installs an old version of pip whose shim script imports pip's main directly.
pip install --upgrade pip installs pip 10.0.1 and moves main into an internal directory, _internal. It also adds its own shim script to PATH.
Calling pip then fails because the shell still runs the old shim script: its path is cached. Running hash -d pip in between fixes the issue.
So apparently, splitting the install and the upgrade into two RUN instructions has a similar effect to hash -d pip. Workarounds (also suggested by Andriy Maletsky) are 1) pin the pip upgrade to 9.0.3, 2) install the latest pip from source in the first place, 3) run hash -r in between, or 4) use another RUN instruction for later uses of pip.
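For example, workaround 3 applied to Dockerfile1 would look roughly like this (a sketch; hash -r is the portable spelling and is accepted by Ubuntu's default /bin/sh):
// Dockerfile1, clearing the shell's command cache after upgrading pip
FROM ubuntu:16.04
RUN apt-get -y update && \
    apt-get -y install python-pip python-dev build-essential && \
    pip install --upgrade pip && \
    hash -r && \
    pip install --upgrade virtualenv && \
    virtualenv /venv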
The problem is that the pip executable (/usr/bin/pip) breaks while pip is being upgraded from version 9 to version 10.
Possible solutions:
1. Do not upgrade pip; keep using v9.
2. Do not use apt-get to install pip. Download it manually.
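For option 2, a minimal sketch (the get-pip.py URL is the generic one; for Python 2.7 nowadays you may need the version-specific path under /pip/2.7/):
FROM ubuntu:16.04
RUN apt-get -y update && \
    apt-get -y install python-dev build-essential curl && \
    curl -sS https://bootstrap.pypa.io/get-pip.py -o get-pip.py && \
    python get-pip.py && \
    pip install --upgrade virtualenv && \
    virtualenv /venv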
Why does docker build fail in the first case? If we just run those commands in bash, there won't be any error.
No, there will be an error. I ran those commands inside docker run --rm -it ubuntu:16.04 bash and got it.
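One quick way to see the clash inside that container (an illustrative check of my own): after the in-place upgrade, the apt-owned shim and the freshly installed script coexist, and the shell's cached path decides which one runs.
which -a pip      # typically /usr/local/bin/pip (new) and /usr/bin/pip (apt's old shim)
pip --version     # runs whichever of the two the shell currently resolves to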
Why does docker build succeed in the second case? How is this related to Docker's layering concept?
I believe you made a mistake somewhere in the second RUN and it is silencing an error (in the part you didn't provide). For example, this will work, because ; is used instead of && and execution doesn't stop after the failing command:
RUN pip install --upgrade virtualenv && \
virtualenv /venv; source /venv/bin/activate
Why does specifying a pip version in Dockerfile1 (i.e. pip install --upgrade pip==9.0.3) also solve the problem?
Because this pip bug appeared in version 10.
P.S. You should not update or manually change files that were added to your system via apt-get (which is what you are doing with pip install --upgrade pip).
I wish to install opencv-python on an Ubuntu 15.04 machine via the command
pip3 install opencv-python
But as soon as I run this command I get the following error:
Downloading/unpacking opencv-python
Could not find any downloads that satisfy the requirement opencv-python
Cleaning up...
No distributions at all found for opencv-python
Storing debug log for failure in /home/Nadeem/.pip/pip.log
Any help would be much appreciated.
Thanks!!
You can install opencv from source.
Follow this link to do so.
Or you may need to upgrade your pip3 using the following command:
pip3 install --upgrade pip
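The upgrade matters because older pip releases cannot use the manylinux wheels that opencv-python is published as, which is one likely reason for the "could not find any downloads" message (my reading of the error, not something stated in the original answer). After upgrading, simply retry:
pip3 install --upgrade pip
pip3 install opencv-python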
EDIT
For completeness (and in case the link breaks), here are the steps to compile and install OpenCV from source on Ubuntu (tested on Ubuntu 14.04 LTS with Python 3).
Step 1 Update the packages
sudo apt-get update
sudo apt-get upgrade
Step 2 Install dependencies
sudo apt-get install build-essential cmake git pkg-config # Developer tools required to compile opencv
sudo apt-get install libjpeg8-dev libtiff4-dev libjasper-dev libpng12-dev # Libraries required to read various image formats from disk
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev # Libraries required to read various video formats
sudo apt-get install libgtk2.0-dev # Required by opencv for GUI features
sudo apt-get install libatlas-base-dev gfortran # Packages used by opencv to optimize various functions.
pip3 install --upgrade pip
Step 3 Set up a virtual environment (using conda)
conda create -n opencv-example-env python=3.6
source activate opencv-example-env # Activate the environment
Step 4 Install packages required to compile opencv
sudo apt-get install python3.6-dev # If the python version is not 3.6 then make changes to this command accordingly.
pip install numpy # This should be done after the environment in Step 3 is activated
Step 5: Build and install OpenCV 3.3 with Python 3 bindings
5.1 Clone the opencv source
cd ~
mkdir opencv-source
cd opencv-source
git clone https://github.com/Itseez/opencv.git
cd opencv
git checkout 3.3.0 # Branch you want to compile from
5.2 Clone the OpenCV contrib repo
It contains extra functionality such as standard keypoint detectors and local invariant descriptors (SIFT, SURF, etc.).
cd ~
mkdir opencv-contrib
cd opencv-contrib
git clone https://github.com/Itseez/opencv_contrib.git
cd opencv_contrib
git checkout 3.3.0 # The version you want to compile
5.3 Compile, build, and install
cd ~/opencv-source/opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_C_EXAMPLES=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv-contrib/opencv_contrib/modules \
-D BUILD_EXAMPLES=ON ..
make -j4
sudo make install
sudo ldconfig
5.4 Link the installed OpenCV shared object (cv2.so) into the environment's Python site-packages
ln -s /usr/local/lib/python3.6/site-packages/cv2.so /path-to-python-sitepackages-of-the-environment/cv2.so
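If you are unsure what /path-to-python-sitepackages-of-the-environment should be, you can ask the environment's own interpreter (my addition):
python -c "from distutils.sysconfig import get_python_lib; print(get_python_lib())"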
Step 6 Verify the installation
import cv2
print(cv2.__version__)  # should print 3.3.0, the tag checked out above
If the above runs without an error, OpenCV is installed successfully.
First, upgrade pip using sudo.
arsho:~/workspace $ sudo pip3 install --upgrade pip
Successfully installed pip
Now install opencv-python again using sudo.
arsho:~/workspace $ sudo pip3 install opencv-python
Successfully installed numpy-1.13.1 opencv-python-3.3.0.10
Finally, check the opencv-python version and location using pip.
arsho:~/workspace $ pip3 show opencv-python
---
Name: opencv-python
Version: 3.3.0.10
Location: /usr/local/lib/python3.4/dist-packages
Requires: numpy
I have tested this using Ubuntu 14.04.5 LTS in https://c9.io/.
I am trying to get a script running which needs python3 and networkx.
python3-networkx is not in the apt-repository, so I installed it using:
apt-get install python-networkx
But my script still crashed, saying networkx is not found.
How can I install the python3 version?
For installing packages in Debian, you may run:
sudo apt-get update
sudo apt-cache search networkx
Which shows this:
python-networkx - tool to create, manipulate and study complex networks
python-networkx-doc - tool to create, manipulate and study complex networks - documentation
python3-networkx - tool to create, manipulate and study complex networks (Python3)
Then you can run:
sudo apt-get install python-networkx
Alternatively, you can use pip:
sudo apt-get update
sudo apt-get install python-pip
sudo pip install networkx
I have tried this in Jessie and it worked with Python 2.7.
For installing python3 you can use:
sudo apt-get update
sudo apt-get install python3 python3-pip
and for installing networkx:
In Debian jessie:
sudo pip3 install networkx
In Debian Wheezy:
sudo pip-3.2 install networkx
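To confirm that the Python 3 package is actually picked up, a quick check (my addition, not part of the original answer):
python3 -c "import networkx; print(networkx.__version__)"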
You can check more info about apt-get here, or run man apt-get in a Linux terminal.
Also you can check pip documentation here.
I came across a tutorial which lists a number of libraries to install before installing Django (I am using Ubuntu 14.04, Python 3, and Django 1.8):
$ sudo apt-get update
$ sudo apt-get -y upgrade
$ sudo apt-get install -y build-essential
$ sudo apt-get install python-setuptools python-dev python3.4-dev python-software-properties libpq-dev
$ sudo apt-get install libtiff4-dev libjpeg8-dev zlib1g-dev libfreetype6-dev liblcms2-dev libwebp-dev tcl8.5-dev tk8.5-dev
$ sudo apt-get build-dep python-imaging
But other tutorials may not list so many libraries to install. I wonder which are absolutely necessary and which may be omitted?
You only need to install these dependencies if you want image processing via pillow and if you plan on installing it via pip (the Python package manager) rather than apt-get (Ubuntu's package manager).
Since you're using a virtualenv, you will need to install this package from source. The following commands will get the build dependencies and install pillow using pip.
$ sudo apt-get build-dep python3-imaging
$ pip install pillow
Note that pillow is a beast to compile. Be prepared to wait several minutes.
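Once the build finishes, one way to confirm that the JPEG support those apt packages provide was actually compiled in (a sanity check of my own, run inside the virtualenv):
python -c "from PIL import Image; Image.new('RGB', (10, 10)).save('/tmp/pillow_check.jpg')"
# no exception means Pillow and its libjpeg binding built correctly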