Docker container exhibits different behaviour when run automatically - python

I have a basic Python Docker container that uses the O365 library to retrieve mail from Office365. This is the Dockerfile:
FROM python:3
ADD requirements.txt ./
RUN pip install -r requirements.txt
ADD ./main ./main
CMD [ "python", "./main/main.py"]
The first time you run the O365 library you need to authorize it; it then stores an o365_token.txt which it uses from then on. That looks like this:
Visit the following url to give consent:
https://login.microsoftonline.com/common/oauth2/v2.0/authorize?resp....
Paste the authenticated url here:
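For context, that prompt comes from the library's interactive authentication. A rough sketch of the authentication code (placeholder credentials, assuming the O365 Account / FileSystemTokenBackend API) looks like this:
from O365 import Account, FileSystemTokenBackend

# Placeholder credentials from the Azure app registration.
credentials = ('client_id', 'client_secret')

# The token backend decides where o365_token.txt ends up; by default it is
# written relative to the current working directory.
token_backend = FileSystemTokenBackend(token_path='.', token_filename='o365_token.txt')
account = Account(credentials, token_backend=token_backend)

if not account.is_authenticated:
    # Prints the consent URL, asks for the authenticated URL to be pasted back,
    # then saves the token through the token backend.
    account.authenticate(scopes=['basic', 'message_all'])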
This also happened on my new Docker container, so I logged in to it through Bash:
docker run -it hvdveer/e2t-python bash
But now when I run it manually, it just uses the existing token and works without verification. Deleting the token file and authorizing it again also doesn't fix it. Why does it ask for authorization when I run it automatically, but not when I run it manually? Are these different users? How do I fix this?

I fixed it!
The CMD is run from the root directory, so it was looking for the token there. By setting the WORKDIR to my program's directory it now finds the token:
FROM python:3
ADD requirements.txt ./
RUN pip install -r requirements.txt
WORKDIR /main
ADD ./main .
CMD [ "python", "./main.py"]
The reason running it from root by hand and creating a token there didn't solve the problem is that those changes aren't saved: every docker run starts a fresh container from the image, so anything written inside a running container is gone once that container is removed. Live and learn.
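If re-authorizing after every rebuild becomes a nuisance, one possible workaround (a sketch, assuming the token file already exists on the host and that the app reads it from /main, the WORKDIR above) is to bind-mount it into the container:
docker run -v "$(pwd)/o365_token.txt:/main/o365_token.txt" hvdveer/e2t-python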

Related

Folder created in one step of the Dockerfile is not available in later steps

I am trying to dockerize a Flask application, but before I spin up the server I need to make sure some files are available. These files are stored in Google Cloud Storage. A script fetches the data from GCP and stores it in a folder. The final step is to run app.py, but the created folder is not present during this step. Can anyone tell me what I am missing?
Here is my Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /backend
# All python packages needed for this project are put in this file
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY . .
# This script currently connects to GCP cloud storage
# fetch the required file
# create a folder and store the fetched file in that folder
RUN python3 setup.py
CMD ["python3", "-m", "flask", "run", "--host=0.0.0.0"]

Streamlit showing me "Welcome to Streamlit" message when executing it with Docker

I'm trying to run a Docker container created from this Dockerfile
FROM selenium/standalone-chrome
WORKDIR /app
# Install dependencies
USER root
RUN apt-get update && apt-get install python3-distutils -y
RUN wget https://bootstrap.pypa.io/get-pip.py
RUN python3 get-pip.py
COPY requirements.txt ./requirements.txt
RUN pip install -r requirements.txt
RUN pip install selenium==4.1
# Copy src contents
COPY /src /app/
# Expose the port
EXPOSE 8501
# Execution
ENTRYPOINT [ "streamlit", "run" ]
CMD ["app.py"]
Building the image works, but when I run the container, I get the following message:
👋 Welcome to Streamlit!
If you're one of our development partners or you're interested in getting
personal technical support or Streamlit updates, please enter your email
address below. Otherwise, you may leave the field blank.
Email: 2022-06-06 09:20:27.690
And therefore I am not able to press Enter and continue, as the execution halts. How should I change my Dockerfile so that it directly executes the streamlit run command and bypasses this prompt?
That welcome message is displayed when there is no ~/.streamlit/credentials.toml file with the following content:
[general]
email=""
You can either create the above file (.streamlit/credentials.toml) within your app directory and copy it into the container image in your Dockerfile, or create the file with RUN commands like the following:
mkdir -p ~/.streamlit/
echo "[general]" > ~/.streamlit/credentials.toml
echo "email = \"\"" >> ~/.streamlit/credentials.toml
I would suggest the former approach to reduce the number of layers and thereby reduce the final image size.
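As a sketch of the suggested approach, assuming credentials.toml is kept in a .streamlit/ folder next to the Dockerfile (and that home is /root, since the Dockerfile above switches to USER root):
COPY .streamlit/credentials.toml /root/.streamlit/credentials.toml
The RUN variant, collapsed into a single layer, would look roughly like:
RUN mkdir -p /root/.streamlit && \
    printf '[general]\nemail = ""\n' > /root/.streamlit/credentials.toml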

Docker image having apscheduler does not run at all

I am new to docker, so bear with me on this.
I have an app.py file which simply uses apscheduler to print a sentence on the console. I have followed the structure from the official guide for the Python file. When I run the file on my console, it runs as expected (it prints the Tick statement every 10 seconds).
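For reference, it follows the official apscheduler example, roughly like this (the exact file isn't shown here):
from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

def tick():
    print(f"Tick! The time is: {datetime.now()}")

scheduler = BlockingScheduler()
# Print the tick statement every 10 seconds.
scheduler.add_job(tick, 'interval', seconds=10)
scheduler.start()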
Now, I want to dockerize it and upload the image to Docker Hub. I followed the Docker documentation and this is what my Dockerfile looks like:
FROM python:3
COPY requirements.txt .
COPY app.py .
RUN pip install --trusted-host pypi.python.org -r requirements.txt
CMD [ "python", "app.py" ]
I have listed the module names in requirements.txt as below:
datetime
apscheduler
The folder is flat: app.py and requirements.txt are at the same level in the directory.
|
|- app.py
|- requirements.txt
I use the command below to build the Docker image:
docker build . -t app1:ver3
The docker image builds successfully and shows up when I do
docker images
Problem is, when I run the docker image with
docker run app1:ver3
the container does not show any output.
In fact the container shows as listed when I do docker ps, which is expected, but the run command should show me print statements on the console every 10 seconds.
There are two things here:
You need to use docker run -it app1:ver3
-i: Interactive mode
-t: Enable TTY
I believe just -t alone may also do the job. See the link below for details:
https://docs.docker.com/engine/reference/run/

Docker - Do we need to include the RUN command in the Dockerfile

I have some Python code, and to build it into a Docker image I can use the command below:
sudo docker build -t customdocker .
This builds the Python code into a Docker image. For the build I use a Dockerfile with the following contents:
FROM python:3
ADD my_script.py /
ADD user.conf /srv/config/conf.d/
RUN pip3 install <some-package>
CMD [ "python3", "./my_script.py" ]
In this we have a RUN command which installs the required packages. Let's say we have deleted the image for some reason and want to build it again: is there any way we can skip this RUN step to save some time, because I think this is already installed?
Also, in my code I am using a file user.conf which is in another directory. For that I am including it in the Dockerfile and also saving a copy of it in the current directory. Is there a way in Docker to define my working directory so that the image looks for the file inside that directory?
Thanks
No, you cannot remove the RUN or other statements from the Dockerfile if you want to build the Docker image again after deleting it.
You can use the WORKDIR instruction in your Dockerfile, but its scope is within the Docker image, i.e. when you create a container from the image the working directory will be set to the one mentioned in WORKDIR.
For example:
WORKDIR /srv/config/conf.d/
This sets /srv/config/conf.d/ as the working directory, but you still have to use the following in the Dockerfile while building, in order to copy the file to the specified location:
ADD user.conf /srv/config/conf.d/
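Putting the two together, a rough sketch using the paths from the question (not a definitive layout) could be:
FROM python:3
# Relative paths in later instructions resolve against this directory, and it is
# also the working directory when a container starts.
WORKDIR /srv/config/conf.d/
ADD user.conf .
ADD my_script.py /
RUN pip3 install <some-package>
CMD [ "python3", "/my_script.py" ]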
Answering your first question: a Docker image holds everything related to your Python environment, including the packages you install. When you delete the image, the packages are deleted with it. Therefore, no, you cannot skip that step.
Now on to your second question: you can bind-mount a directory when starting the container with:
docker run -v /directory-you-want-to-mount:/src/config/ customdocker
You can also set the working directory with -w flag.
docker run -w /path/to/dir/ -i -t customdocker
https://docs.docker.com/v1.10/engine/reference/commandline/run/

Access ssh keys during docker-compose build in flask project

Generally, my question is about being able to access ssh keys during docker-compose build.
I'm able to access my ssh keys when running docker-compose up, using volume mapping in my docker-compose.yml file, which looks like:
services:
  flask:
    volumes:
      - ~/.ssh:/root/.ssh
But I cannot access them during docker-compose build
More Specifics
I am running a Python Flask app. I want to install a private git repo as a pip package, so I added this line to requirements.txt:
git+ssh://git@github.com/username/repo_name.git@branch_name
If I run bash in the service through docker-compose run flask bash then I can manually run pip install git+ssh://git@github.com/username/repo_name.git@branch_name and that works, because I have the volume mapping to the ssh keys.
But when I run docker-compose build, it cannot access the private git repo because it doesn't have access to the ssh keys.
Anyone know if there's a way to give docker-compose build access to ssh keys, or another way around this problem?
volumes are attached at run time of your container, NOT at build time.
Solution:
Copy your .ssh next to your Dockerfile and do the following in your Dockerfile:
COPY ./.ssh /root/.ssh
Be careful:
This way, your .ssh directory will be available to everyone who has access to your Docker image. So either create a technical user and copy their .ssh into the image, or (better) do something like this:
FROM baseimage AS builder
COPY ./.ssh /root/.ssh
RUN your commands
FROM baseimage
COPY --from=builder some-directory some-directory
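Applied to the pip case from the question, a hedged sketch of that multi-stage idea (assuming the keys were copied next to the Dockerfile as ./.ssh, and that the Debian-based python:3 image ships git and an ssh client) might be:
FROM python:3 AS builder
COPY ./.ssh /root/.ssh
# Pre-trust github.com so the ssh-based clone doesn't prompt.
RUN ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY requirements.txt .
# Install into a throwaway prefix so only the packages are carried forward.
RUN pip install --prefix=/install -r requirements.txt

FROM python:3
# The ssh keys stay behind in the builder stage.
COPY --from=builder /install /usr/local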
Edit:
Another option is to use username:password instead of ssh key authentication. This way, you would use build args in your Dockerfile like:
FROM baseimage
ARG GIT_USER
ARG GIT_PASS
RUN git clone http://${GIT_USER}:${GIT_PASS}@your-git-url.git
and build it with: docker build --build-arg GIT_USER=<user> --build-arg GIT_PASS=<pass> .
SSH is harder to set up than just using username:password. Here is the line I added to requirements.txt that got it to work:
-e git+https://<username>:<password>@github.com/<path to git repo>.git#egg=<package_name>
If you want to get a specific tag, then you can do this:
-e git+https://<username>:<password>@github.com/<path to git repo>.git@<tag_name>#egg=<package_name>
You can use any username and password that has access to the git repo, but I recommend that you don't use your main git account for obvious security reasons. Create a new user specifically for this project, and grant them access rights.
