How to install Algo VPN remaining dependencies in Terminal (Python)

I am trying to install the remaining dependencies for Algo VPN using Terminal, following step 4 on https://github.com/trailofbits/algo
I believe I was in the folder above the one I was supposed to be in the last time I ran this, and I used the sudo command. So now I think there is a permissions issue that I don't know how to fix. It could be a simple fix, but I just don't want to make any more of a mess with the permissions.
Here is the code that I am running in Terminal:
$ python -m virtualenv --python=`which python2` env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
I receive the error:
Running virtualenv with interpreter /usr/bin/env
env: /Users/mark/Library/Python/2.7/lib/python/site-packages/virtualenv.py: Permission denied

When I ran this about a week ago I was able to get it to work, and I believe it looked like the command below. I thought I had just left it with no version of Python, believing it would default to the current version, and it worked.
$ python -m virtualenv --python=env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
So I decided to try
$ python -m virtualenv --python=python2.7 env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
And it worked.
So maybe I had an extra space so it looked like
$ python -m virtualenv --python= env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
or maybe I did in fact need the python2.7
$ python -m virtualenv --python=python2.7 env &&
source env/bin/activate &&
python -m pip install -U pip virtualenv &&
python -m pip install -r requirements.txt
I will note that I used Terminal to show hidden files with
defaults write com.apple.finder AppleShowAllFiles YES
and then I navigated in Finder to
/Users/mark/Library/Python/2.7/lib/python/site-packages/virtualenv.py
and it showed that I had the correct permissions. So I don't think it had to do with using sudo previously.
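One hedged explanation of the "Running virtualenv with interpreter /usr/bin/env" error above: if python2 is not on the PATH, the backtick substitution expands to nothing, the argument collapses to `--python= env`, and virtualenv takes the destination name env as the interpreter, which resolves to /usr/bin/env. A quick local check:

```shell
# If python2 is missing, interp is empty and the echoed command line
# collapses to `--python= env` -- matching the
# "Running virtualenv with interpreter /usr/bin/env" error.
interp="$(command -v python2 || true)"
echo "python -m virtualenv --python=${interp} env"
```

This would also explain why spelling the interpreter out as --python=python2.7 worked.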


'No module named PyInstaller' after what appears to be a successful install

I am building a Docker image. Within it I am trying to install a number of Python packages in one RUN. All packages within that command are installed correctly, but PyInstaller is not, for some reason, although the build logs make me think that it should have been: Successfully installed PyInstaller
The minimal Dockerfile to reproduce the issue:
FROM debian:buster
RUN apt-get update && \
    apt-get install -y \
    python3 \
    python3-pip \
    unixodbc-dev
RUN python3 -m pip install --no-cache-dir pyodbc==4.0.30 && \
    python3 -m pip install --no-cache-dir Cython==0.29.19 && \
    python3 -m pip install --no-cache-dir PyInstaller==3.5 && \
    python3 -m pip install --no-cache-dir selenium==3.141.0 && \
    python3 -m pip install --no-cache-dir bs4==0.0.1
RUN python3 -m PyInstaller
The last run command fails with /usr/bin/python3: No module named PyInstaller, all other packages can be imported as expected.
The issue is also reproducible with this Dockerfile:
FROM debian:buster
RUN apt-get update && \
    apt-get install -y \
    python3 \
    python3-pip
RUN python3 -m pip install --no-cache-dir PyInstaller==3.5
RUN python3 -m PyInstaller
What is the reason for this issue and what is the fix?
EDIT:
When I run a container from the layer before the last RUN, I can see that no PyInstaller is installed, but I can run python3 -m pip install --no-cache-dir PyInstaller==3.5 manually and then it works without changing anything else.
Although I do not fully understand the reason behind it, it seems like the --no-cache-dir option was causing the issue. The Dockerfile below builds without an issue:
FROM debian:buster
RUN apt-get update && \
    apt-get install -y \
    python3 \
    python3-pip
RUN python3 -m pip install PyInstaller==3.5
RUN python3 -m PyInstaller --help
Edit: This seems to be an issue not with PyInstaller itself, but with the specific version of pip; see https://github.com/pyinstaller/pyinstaller/issues/6963 for details.
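Following up on that edit, a hedged workaround sketch (untested here): upgrade pip itself before installing, so the affected pip version never handles the install:

```dockerfile
FROM debian:buster
RUN apt-get update && \
    apt-get install -y python3 python3-pip
# Upgrading pip first sidesteps the distribution's affected pip version
# (per the pyinstaller issue linked above).
RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install --no-cache-dir PyInstaller==3.5
RUN python3 -m PyInstaller --help
```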
I'm not familiar with PyInstaller but in their requirements page they wrote:
If the pip setup fails to build a bootloader, or if you do not use pip
to install, you must compile a bootloader manually. The process is
described under Building the Bootloader.
Have you tried that in your Dockerfile?
(And you're totally right, it should fail... )

How do I install tensorflow in a Docker image w/ venv?

I have the following code...
FROM python:latest
ENV VIRTUAL_ENV "/venv"
RUN python -m venv $VIRTUAL_ENV
ENV PATH "$VIRTUAL_ENV/bin:$PATH"
# Python commands run inside the virtual environment
RUN /venv/bin/python3 -m pip install --upgrade pip
RUN /venv/bin/pip3 install tensorflow
But when I run it I get...
ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow
I tried using the tensorflow image like...
FROM tensorflow/tensorflow:latest
ENV VIRTUAL_ENV "/venv"
RUN python -m venv $VIRTUAL_ENV
ENV PATH "$VIRTUAL_ENV/bin:$PATH"
# Python commands run inside the virtual environment
RUN /venv/bin/python3 -m pip install --upgrade pip
but then I get...
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.
apt-get install python3-venv
So I change to
FROM tensorflow/tensorflow:latest
RUN apt-get install python3-venv -y
ENV VIRTUAL_ENV "/venv"
RUN python -m venv $VIRTUAL_ENV
ENV PATH "$VIRTUAL_ENV/bin:$PATH"
# Python commands run inside the virtual environment
RUN /venv/bin/python3 -m pip install --upgrade pip
But I get...
E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/universe/p/python3.6/python3.6-venv_3.6.9-1~18.04ubuntu1.1_amd64.deb 404 Not Found [IP: 91.189.88.152 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
How do I handle this?
Per @drum's comment, this works...
FROM tensorflow/tensorflow:latest
RUN apt-get update && apt-get upgrade -y
RUN apt-get install python3-venv -y
ENV VIRTUAL_ENV "/venv"
RUN python -m venv $VIRTUAL_ENV
ENV PATH "$VIRTUAL_ENV/bin:$PATH"
# Python commands run inside the virtual environment
RUN /venv/bin/python3 -m pip install --upgrade pip
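To complete the original goal, the working file above would presumably need one more layer for tensorflow itself; a sketch, not verified here:

```dockerfile
FROM tensorflow/tensorflow:latest
RUN apt-get update && apt-get upgrade -y
RUN apt-get install python3-venv -y
ENV VIRTUAL_ENV "/venv"
RUN python -m venv $VIRTUAL_ENV
ENV PATH "$VIRTUAL_ENV/bin:$PATH"
RUN /venv/bin/python3 -m pip install --upgrade pip
# The step the question originally set out to run:
RUN /venv/bin/pip3 install tensorflow
```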

Setting alias in Dockerfile not working: command not found

I have the following in my Dockerfile:
...
USER $user
# Set default python version to 3
RUN alias python=python3
RUN alias pip=pip3
WORKDIR /app
# Install local dependencies
RUN pip install --requirement requirements.txt --user
When building the image, I get the following:
Step 13/22 : RUN alias pip=pip3
---> Running in dc48c9c84c88
Removing intermediate container dc48c9c84c88
---> 6c7757ea2724
Step 14/22 : RUN pip install --requirement requirements.txt --user
---> Running in b829d6875998
/bin/sh: pip: command not found
Why is pip not recognized if I set an alias right on top of it?
PS: I do not want to use .bashrc for loading aliases.
The problem is that the alias only exists for that intermediate layer in the image. Try the following:
FROM ubuntu
RUN apt-get update && apt-get install python3-pip -y
RUN alias python=python3
Testing here:
❰mm92400❙~/sample❱✔≻ docker build . -t testimage
...
Successfully tagged testimage:latest
❰mm92400❙~/sample❱✔≻ docker run -it testimage bash
root@78e4f3400ef4:/# python
bash: python: command not found
root@78e4f3400ef4:/#
This is because a new bash session is started for each layer, so the alias will be lost in the following layers.
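The same behavior can be reproduced locally without Docker: an alias defined in one sh -c invocation is gone in the next, just as it is gone between RUN layers (mypy123 is an arbitrary made-up alias name):

```shell
# Define an alias in one shell process... (that shell then exits)
sh -c 'alias mypy123=python3'
# ...and it does not exist in a fresh shell, like a new RUN layer.
sh -c 'mypy123 --version' 2>/dev/null || echo "alias gone"   # prints "alias gone"
```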
To keep a stable alias, you can use a symlink as python does in their official image:
FROM ubuntu
RUN apt-get update && apt-get install python3-pip -y
# as a quick note, for a proper install of python, you would
# use a python base image or follow a more official install of python,
# changing this to RUN cd /usr/local/bin
# this just replicates your issue quickly
# the `ln -s python3 python` line is what properly aliases your python
RUN cd "$(dirname $(which python3))" \
    && ln -s idle3 idle \
    && ln -s pydoc3 pydoc \
    && ln -s python3 python \
    && ln -s python3-config python-config
RUN python -m pip install -r requirements.txt
Note the use of the python3-pip package to bundle pip. When calling pip, it's best to use the python -m pip syntax, as it ensures that the pip you are calling is the one tied to your installation of python:
python -m pip install -r requirements.txt
I managed to do that by setting aliases in the /root/.bashrc file.
I followed this example to get an idea of how to do that.
PS: I am using this in a jenkins/jenkins:lts container, and as I looked around, as @C.Nivs said:
The problem is that the alias only exists for that intermediate layer in the image
So in order to do that I had to find a way to add the following commands:
ENV FLAG='--kubeconfig /root/.kube/config'
RUN echo "alias helm='helm $FLAG'" >>/root/.bashrc
CMD /bin/bash -c "source /root/.bashrc" && /usr/local/bin/jenkins.sh
For the CMD part, check the image you are using so you don't interrupt its normal behaviour.

install python package at current directory

I am a Mac user and used to run pip install with --user, but recently, after a brew update, I found some strange things that may be related.
Whatever I try, the packages are always installed to ~/Library/Python/2.7/lib/python/site-packages
Here are the commands I run.
$ python -m site --user-site
~/Library/Python/2.7/lib/python/site-packages
$ pip install --user -r requirements.txt
$ PYTHONUSERBASE=. pip install --user -r requirements.txt
So what could be the problem?
(I use this for Lambda zip packaging.)
Updates:
If you are using Mac OS X and have Python installed using Homebrew, the accepted command will not work. A simple workaround is to add a setup.cfg file in your /path/to/project-dir with the following content.
[install]
prefix=
https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html
You can use the target (-t) flag of pip install to specify a target location for the installation.
In use:
pip install -r requirements.txt -t /path/to/directory
to the current directory:
pip install -r requirements.txt -t .
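One caveat worth hedging: packages installed with -t are not automatically importable from elsewhere; for a Lambda-style layout you either zip the target directory or point PYTHONPATH at it. A small sketch (the vendor directory and mymod module are made-up stand-ins for a real pip install -r requirements.txt -t ./vendor):

```shell
# Simulate a target-dir install with a stand-in module, then import it
# by putting the target directory on PYTHONPATH.
mkdir -p ./vendor
printf 'x = 1\n' > ./vendor/mymod.py
PYTHONPATH=./vendor python3 -c "import mymod; print(mymod.x)"   # prints 1
```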

How can I upgrade pip inside a venv inside a Dockerfile?

While running
$ sudo docker build -t myproj:tag .
I am hit with the message
You are using pip version 10.0.1, however version 18.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
and given recent occasional subtleties manifesting themselves with the error:
"/usr/bin/pip" "from pip import main" "ImportError: cannot import .."
I'd rather yield and indeed upgrade.
And so I add the pip upgrade command in the Dockerfile, after the venv is built, since the pip that matters is the one inside the venv (am I getting this right?). So my Dockerfile now has this:
...
RUN python -m venv venv
RUN pip install --upgrade pip
...
But doing so does not avoid the "You are using pip 10.x" message. What am I missing?
Update
Though a promising suggestion, neither
RUN source venv/bin/activate
RUN pip install --upgrade pip
nor
RUN source venv/bin/activate
RUN python -m pip install --upgrade pip
eliminate the "You are using pip version 10.0.1, ..." message.
The single easiest answer to this is to just not bother with a virtual environment in a Docker image. A virtual environment gives you an isolated filesystem space with a private set of Python packages that don't conflict with the system install, but so does a Docker image. You can just use the system pip in a Docker image and it will be fine.
FROM python:3.7
RUN pip install --upgrade pip
WORKDIR /usr/src/app
COPY . .
RUN pip install .
CMD ["myscript"]
If you really want a virtual environment, you either need to specifically run the wrapper scripts from the virtual environment's path
RUN python -m venv venv
RUN venv/bin/pip install --upgrade pip
or run the virtual environment "activate" script on every RUN command; the environment variables it sets won't carry over from one step to another. (Each RUN command in effect does its own docker run; docker commit sequence under the hood and will launch a new shell in a new container; the Dockerfile reference describes this a little bit.)
RUN python -m venv venv
RUN . venv/bin/activate \
    && pip install --upgrade pip
COPY . .
RUN . venv/bin/activate \
    && pip install .
CMD ["venv/bin/myscript"]
Trying to activate the virtual environment in its own RUN instruction does nothing beyond generate a no-op layer.
# This step does nothing
RUN . venv/bin/activate
# And therefore this upgrades the system pip
RUN pip install --upgrade pip
Before you can use your virtual environment venv, you need to activate it.
On Windows:
venv\Scripts\activate.bat
On Unix or MacOS, run:
source venv/bin/activate
Please note that venv is the name of your environment. You created this environment with RUN python -m venv venv. I strongly recommend using a different name.
Then you can upgrade with python -m pip install --upgrade pip
After you create a virtual environment in a Docker container through
RUN python -m venv venv
then run either
RUN venv/bin/pip install --upgrade pip
or
RUN venv/bin/python -m pip install --upgrade pip
but neither
RUN pip install --upgrade pip
nor
RUN python -m pip install --upgrade pip
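A quick way to check which pip a given command resolves to is to ask the venv's interpreter for pip's version explicitly; the path it reports shows whether you are inside the venv or on the system install. A sketch using a throwaway venv:

```shell
# Create a throwaway venv and ask its own interpreter for pip's version;
# the printed location will be inside the venv, not the system site-packages.
python3 -m venv /tmp/venvdemo
/tmp/venvdemo/bin/python -m pip --version
```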
