Access SSH keys during docker-compose build in a Flask project - python

Generally, my question is about being able to access ssh keys during docker-compose build.
I'm able to access my ssh keys when running docker-compose up, using volume mapping in my docker-compose.yml file, which looks like:
services:
  flask:
    volumes:
      - ~/.ssh:/root/.ssh
But I cannot access them during docker-compose build
More Specifics
I am running a python flask app. I want to install a private git repo as a pip package. So I added this line to requirements.txt
git+ssh://git@github.com/username/repo_name.git@branch_name
If I run bash in the service through docker-compose run flask bash then I can manually run pip install git+ssh://git@github.com/username/repo_name.git@branch_name and that works, because I have the volume mapping to the ssh keys.
But when I run docker-compose build, it cannot access the private git repo because it doesn't have access to the ssh keys.
Anyone know if there's a way to give docker-compose build access to ssh keys, or another way around this problem?

Volumes are attached at run time of your container, NOT at build time.
Solution:
Copy your .ssh next to your Dockerfile and do the following in your Dockerfile:
COPY ./.ssh /root/.ssh
Be careful:
Done this way, your .ssh directory will be available to everyone who has access to your Docker image. So either create a technical user and copy that user's .ssh into the image, or (better) use a multi-stage build like this:
FROM baseimage AS builder
COPY ./.ssh /root/.ssh
RUN your commands
FROM baseimage
COPY --from=builder some-directory some-directory
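Applied to the pip case from the question, a minimal sketch of that multi-stage pattern could look like the following (the python:3.9 tag, the /install prefix and the requirements.txt location are assumptions, not taken from the question):

FROM python:3.9 AS builder
# keys exist only in this throwaway stage
COPY ./.ssh /root/.ssh
# trust GitHub's host key so the ssh clone doesn't prompt
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY requirements.txt .
# install into a separate prefix so only the packages are carried over
RUN pip install --prefix=/install -r requirements.txt

FROM python:3.9
# the final image gets the installed packages but never the keys
COPY --from=builder /install /usr/local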
Edit:
Another option is to use username:password instead of ssh key authentication. This way, you would use build args in your Dockerfile like:
FROM baseimage
ARG GIT_USER
ARG GIT_PASS
RUN git clone http://${GIT_USER}:${GIT_PASS}@your-git-url.git
and build it with args: docker build --build-arg GIT_USER=<user> --build-arg GIT_PASS=<pass> .
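Because the question uses docker-compose build rather than docker build, the same build args can also be wired through the compose file; a minimal sketch, reusing the flask service name from the question and reading the values from the shell environment:

services:
  flask:
    build:
      context: .
      args:
        GIT_USER: ${GIT_USER}
        GIT_PASS: ${GIT_PASS}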

SSH is harder to set up than just using username:password. Here is the line I added to requirements.txt that got it to work:
-e git+https://<username>:<password>@github.com/<path to git repo>.git#egg=<package_name>
If you want to get a specific tag, then you can do this:
-e git+https://<username>:<password>@github.com/<path to git repo>.git@<tag_name>#egg=<package_name>
You can use any username and password that has access to the git repo, but I recommend that you don't use your main git account for obvious security reasons. Create a new user specifically for this project, and grant them access rights.

Related

Docker run does not produce any endpoint

I am trying to dockerize this repo. After building it like so:
docker build -t layoutlm-v2 .
I try to run it like so:
docker run -d -p 5001:5000 layoutlm-v2
It downloads the necessary libraries and packages, and then... nothing. No errors, no endpoints generated, just radio silence.
What's wrong? And how do I fix it?
You appear to be expecting your application to offer a service on port 5000, but that doesn't appear to be how your code behaves.
Looking at your code, you seem to be launching a service using gradio. According to the quickstart, calling gr.Interface(...).launch() will launch a service on localhost:7860, and indeed, if we inspect a container booted from your image, we see:
root@74cf8b2463ab:/app# ss -tln
State    Recv-Q   Send-Q   Local Address:Port   Peer Address:Port   Process
LISTEN   0        2048         127.0.0.1:7860         0.0.0.0:*
There's no way to access a service listening on localhost from outside the container, so we need to figure out how to fix that.
Looking at these docs, it looks like you can control the listen address using the server_name parameter:
server_name
to make app accessible on local network, set this to "0.0.0.0". Can be set by environment variable GRADIO_SERVER_NAME. If None, will use "127.0.0.1".
So if we run your image like this:
docker run -p 7860:7860 -e GRADIO_SERVER_NAME=0.0.0.0 layoutlm-v2
Then we should be able to access the interface on the host at http://localhost:7860/ ... and indeed, that seems to work.
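Alternatively, the same thing can be set in the application code instead of through the environment; a minimal sketch, assuming the app builds its interface with gr.Interface (the predict function and its signature here are placeholders, not taken from the repo):

import gradio as gr

def predict(image):
    # placeholder for the real LayoutLM inference code
    return "result"

demo = gr.Interface(fn=predict, inputs="image", outputs="text")
# listen on all interfaces so -p 7860:7860 can reach the server
demo.launch(server_name="0.0.0.0", server_port=7860)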
Unrelated to your question:
You're setting up a virtual environment in your Dockerfile, but you're not using it, primarily because of a typo here:
ENV PATH="VIRTUAL_ENV/bin:$PATH"
You're missing a $ on $VIRTUAL_ENV.
You could optimize the order of operations in your Dockerfile. Right now, making a simple change to your Dockerfile (e.g., editing the CMD setting) will cause much of your image to be rebuilt. You could avoid that by restructuring the Dockerfile like this:
FROM python:3.9
# Install dependencies
RUN apt-get update && apt-get install -y tesseract-ocr
RUN pip install virtualenv && virtualenv venv -p python3
ENV VIRTUAL_ENV=/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN git clone https://github.com/facebookresearch/detectron2.git
RUN python -m pip install -e detectron2
COPY . /app
# Run the application:
CMD ["python", "-u", "app.py"]

File created by docker is uneditable on host

I have a Python script that creates a folder and writes a file in that folder. I can open the file and see its contents, but unfortunately I cannot edit it. I tried adding the command RUN chmod -R 777 . but that didn't help either. On the created files and folders I see a lock icon.
I have been able to reproduce this with a small demo. The contents are as follows:
demo.py
from pathlib import Path
Path("./created_folder").mkdir(parents=True, exist_ok=True)
with open("./created_folder/dummy.txt", "w") as f:
    f.write("Cannot edit the contents of this file")
Dockerfile
FROM python:buster
COPY . ./data/
WORKDIR /data
RUN chmod -R 777 .
CMD ["python", "demo.py"]
docker-compose.yaml
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
After creating these files, run docker-compose up --build, look at the results, and then try to edit and save the created file dummy.txt; the save should fail.
Any idea how to make sure that the created files can be edited and saved on the host?
EDIT:
I am running docker-compose rootless. I had read that it is not a good idea to run Docker with sudo, so I followed the official instructions on adding my user to the docker group, etc.
I actually run the command docker-compose up --build, not just docker compose up.
I am on Ubuntu 20.04
The username is the same for both groups:
~$ grep /etc/group -e "docker"
docker:x:999:username
~$ grep /etc/group -e "sudo"
sudo:x:27:username
Tried using PUID and PGID environment variables... but still the same issue.
Current docker-compose file:
version: "3.3"
services:
  python:
    working_dir: /data
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/data/
    environment:
      - PUID=1000
      - PGID=1000
It seems like the Docker group is different from your user group and you're misunderstanding the way Docker RUN works.
Here's what happens when you run Docker build:
You pull the python:buster image
You copy the contents of the current directory to the docker image
You set the Work directory to the existing data directory
You set the permissions of the data directory to 777
Finally, the CMD is set to indicate what program should run
When you do a docker-compose up, the RUN command has no effect, as it's a Dockerfile instruction and not a runtime command (and the bind mount shadows the image's /data anyway). When your container runs, it writes the files as the container's user/group, which your host user doesn't have permission to edit.
A Docker container runs in an isolated environment, so the host's UID/GID-to-user-name/group-name mapping is not shared with the processes inside the container. You are facing two problems.
The files created inside the container are owned by root:root (by default, or by a UID chosen by the image).
The files are created with permission 644 or 755, which is not editable by the host user.
You could refer to the images provided by linuxserver.io to learn how they solve these two problems, for example qBittorrent. They provide two environment variables, PUID and PGID, to solve problem 1, and a UMASK option to solve problem 2.
Below is the related implementation in their custom Ubuntu base image. You could use that as your image base as well.
The place where PUID and PGID are handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1/root/etc/cont-init.d/10-adduser
The place where UMASK is handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1438aa81e68a5d87eff39ade0f1c879/root/usr/bin/with-contenv#L2-L5
Personally, I'd prefer setting PUID and PGID to values matching the host user (which you can get from id -u and id -g) and NOT touching UMASK unless absolutely necessary.
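If you'd rather not switch to a linuxserver-style base image at all, another option is to run the container process directly as your host UID/GID with the compose user: key; a minimal sketch, assuming your host IDs are 1000:1000 (from id -u / id -g):

version: "3.3"
services:
  python:
    working_dir: /data
    build:
      context: .
      dockerfile: Dockerfile
    user: "1000:1000"   # host UID:GID, so files created in /data stay editable on the host
    volumes:
      - .:/data/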

Docker container exhibits different behaviour when run automatically

I have a basic Python Docker container that uses the O365 library to retrieve mail from Office365.
FROM python:3
ADD requirements.txt ./
RUN pip install -r requirements.txt
ADD ./main ./main
CMD [ "python", "./main/main.py"]
The first time you run this O365 library, you need to authorize it, and it stores an o365_token.txt which it uses after that. That looks like this:
Visit the following url to give consent:
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?resp....
Paste the authenticated url here:
This also happened in my new Docker container, so I logged in to it through bash:
docker run -it hvdveer/e2t-python bash
But now when I manually run it, it just uses the existing token and works without verification. Deleting the token file and authorizing it again also doesn't work. Why does it ask for authorization when I run it automatically, but not when I run it manually? Are these different users? How do I fix this?
I fixed it!
The CMD is run from the root directory, so it's looking for the token there. By changing the WORKDIR to my program's main directory, it now finds the token:
FROM python:3
ADD requirements.txt ./
RUN pip install -r requirements.txt
WORKDIR /main
ADD ./main .
CMD [ "python", "./main.py"]
The reason running it from root by hand and creating a token there didn't solve the problem is that those changes aren't saved: a container's filesystem changes are discarded once the container is removed, and every docker run starts a fresh container from the image. Live and learn.
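If the token created inside the container should also survive the container being removed, one option is to keep the file on the host with a bind mount; a sketch, assuming the token ends up at /main/o365_token.txt in the container and that the host file is created first (otherwise Docker would create a directory in its place):

touch o365_token.txt
docker run -it -v "$PWD/o365_token.txt:/main/o365_token.txt" hvdveer/e2t-python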

How to run my Python script on Docker?

I am trying to run my Python script on Docker. I tried different ways to do it but was not able to. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac. But I want to know how I can make an image and push it to Docker Hub, and after that pull it and run my script on Docker itself.
Going by the question title: if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run using the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your Dockerfile and script in there and change the current directory to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the Docker engine does on build is read the whole build context.
Next we'll take a look at your Dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
Raspberry Pis have a different architecture underneath (ARM instead of x86_64), which COULD HAVE BEEN the problem, but wasn't. If that had been the problem, switching the parent image to FROM armhf/python would have been enough.
Source
BUT! The error kept occurring.
So the solution to this problem is a simple missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
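For the exec-form CMD ["capturing.py", ...] above to start the script directly, the file also needs to be executable (for example chmod +x capturing.py before the COPY, since COPY preserves the file mode); a minimal sketch of the top of the script, with the body being a placeholder:

#!/usr/bin/env python
# capturing.py - the shebang must be the very first line of the file
print("capturing started")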
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want it to upload to the DockerHub, you need to log into the DockerHub with docker login, then upload the image with docker push.
I followed @samprog's (most accepted) answer on my machine running UBUNTU VERSION="14.04.6"
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
Fixed the error after changing my Dockerfile as follows
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
Another way to run a Python script on Docker can be:
Copy the local Python script into the container:
docker cp yourlocalscript.path container_id:/dst_path/
The container id can be found using:
docker ps
Run the Python script inside the container:
docker exec -it container_id python /dst_path/yourlocalscript.path
It's very simple.
1. Go to your Python script's directory and create a file with this title, without any extension:
Dockerfile
2. Now open the Dockerfile and write your script name instead of sci.py.
(Contents of the Dockerfile:)
# I chose the slim tag; you can choose another, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script name
COPY sci.py .
# replace sci.py with your script name
CMD ["python", "sci.py"]
Save it. Now you should create an image from this Dockerfile and your script, and then run it.
3. In the folder's address bar, type cmd and press the Enter key.
4. When the cmd window opens, type:
docker build -t my-python-app .
(this creates an image in Docker named my-python-app)
5. And finally, run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; this dependency HELL between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker container.

Pass Environment Variables from Shippable to Docker

I am using Shippable for two reasons: to automate the build of my docker images and to pass encrypted environment variables. I am able to automate the builds but I can't pass the variables.
I start with entering the environment variable to the Shippable text box in the project settings:
SECRET_KEY=123456
I click the 'encrypt' button and then shippable returns:
- secure : hash123abc...
I put this hash into my shippable.yml file. It looks like:
language: python
python:
  - 2.7
build_image: myusername/myimagename
env:
  - secure: hash123abc...
build:
  post_ci:
    - docker login -u myusername -p mypassword
    - docker build -t myusername/myimagename:latest .
    - docker push myusername/myimagename:latest
integrations:
  hub:
    - integrationName: myintegrationname
      type: docker
      branches:
        only:
          - master
The automated build works! But if I try:
sudo docker run myusername/myimagename:latest echo $SECRET_KEY
I get nothing.
My Dockerfile which sets the environment variables (in this case SECRET_KEY) looks like this:
FROM python:2.7.11
RUN apt-get update
RUN apt-get install -y git
RUN git clone https://github.com/myusername/myrepo.git
ENV SECRET_KEY=$SECRET_KEY
It might be helpful to explain MY logic as I see it, because my thinking may be the issue if it's not in the code:
The Shippable project build is triggered (by a repo push or manually). In shippable.yml it does some things:
builds the initial image
sets the SECRET_KEY environment variable
builds the new image based on the Dockerfile (which sets the env variable SECRET_KEY to the SECRET_KEY set by the .yml two steps earlier)
pushes the image
I'm thinking that now that I've set an environment variable in my image, I can access it. But I get nothing. What's the issue here?
Thanks @Alex Hall for working this out with me!
It turns out that passing environment variables to Docker in this setting must be done with a simple flag to start. So in my shippable.yml I changed:
- docker build -t myusername/myimagename:latest .
to
- docker build --build-arg SECRET_KEY=$SECRET_KEY -t myusername/myimagename:latest .
Then in my Dockerfile I added:
ARG SECRET_KEY
RUN echo $SECRET_KEY > env_file
Lo and behold, the key was in env_file.
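Note that a plain ARG is only visible while the image is being built; if the variable should also exist when the container runs (as in the docker run ... echo $SECRET_KEY test above), the Dockerfile has to promote it to an ENV, keeping in mind that the value is then baked into the image. A sketch:

ARG SECRET_KEY
# persist the build-time value into the runtime environment of the image
ENV SECRET_KEY=$SECRET_KEY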
