Docker Python path - python

I am new to Docker. One thing is confusing me: when I print the output of which python in Docker, it seems to point to the system's Python, even though I have specified python:3.7 as the base image, so it should be pointing to the image's Python, right?
My Dockerfile is as follows:
FROM python:3.7
RUN which python3
RUN which python3.7
RUN which python
The output is
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM python:3.7
---> 7c891de3e220
Step 2/4 : RUN which python3
---> Running in bfcab000b493
/usr/local/bin/python3
Removing intermediate container bfcab000b493
---> be30731a0a5a
Step 3/4 : RUN which python3.7
---> Running in 144cf28963eb
/usr/local/bin/python3.7
Removing intermediate container 144cf28963eb
---> 7434c6aa69cb
Step 4/4 : RUN which python
---> Running in 88e3133f4e41
/usr/local/bin/python
Removing intermediate container 88e3133f4e41
---> 872bfb66fc7d
Successfully built 872bfb66fc7d
Successfully tagged docker_testing:latest
You can see that it is resolving all the python commands to /usr/local/bin/pythonX.
I want to ask whether this is Docker's Python that is being used, or whether it is using my system's Python.
Thanks.

When you run RUN which python3, that path is inside your Docker image, not your system path. You can try RUN touch /usr/bin/testtesttest and then check that the file does not appear on your system.
Data in a Docker container is stored on your host system only when you mount a host directory or volume.
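For example, a minimal sketch of a bind mount (the host path and file name here are just placeholders):
# mount the host directory /tmp/host-data into the container at /data
docker run --rm -v /tmp/host-data:/data python:3.7 touch /data/created-inside-container.txt
# the file now also exists on the host, because /data was a bind mount
ls /tmp/host-data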

No, Docker is fetching the Python image from the Docker registry.
It will not use your system Python.
You can see that it is resolving all the python commands to /usr/local/bin/pythonX
This is because Docker creates a separate filesystem for the image, and this path comes from that filesystem.
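To convince yourself, you can compare the interpreter inside the image with the one on your host; a quick sketch (the exact paths and versions will differ on your machine):
# Python that ships inside the image
docker run --rm python:3.7 which python      # /usr/local/bin/python
docker run --rm python:3.7 python --version
# Python on the host, outside any container
which python
python --version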

Related

Why does container stop when closing VSCode window although "shutdownAction" is set to "none"?

I use VSCode 1.63.2 to ssh into a remote machine with Ubuntu 20.04, to then work on a project inside a Docker container. Whenever I close a VSCode window while executing a Python script in the container, it stops all terminal processes. When I reattach to the container, I see a Python terminal showing Session contents restored from <date> at <time> and the script's outputs up to the moment I disconnected from the container. However, I would like the container to just keep going when I close VSCode or shut down my local computer.
Things I tried so far: First, I cloned my GitHub repo on the remote machine and built a Docker image with the following Dockerfile:
FROM python:3.8-bullseye
RUN pip install -U pip setuptools wheel &&\
useradd -m -r fabioklr
WORKDIR /home/fabioklr/masterthesis
RUN chown -R fabioklr .
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
ARG GIT_HASH
ENV GIT_HASH=${GIT_HASH:-dev}
USER fabioklr
RUN git config --global init.defaultBranch main &&\
git init &&\
git remote add origin <url-to-remote-repo>
Then I ran docker build . for the image, docker run -dit <image-name:tag> /bin/bash to spin up the container, and I attached VSCode to the container with the Remote-Containers: Attach to Running Container command.
Second, I tried it without a custom Dockerfile and without the command line. I opened my project folder on the remote machine, chose the Remote-Containers: Open Folder in Container command and a Python 3 base image from the command palette. VSCode did the rest automatically, but still I encountered the same problem.
Third, I tried it with the same Open Folder in Container command but using the Dockerfile from above and a custom devcontainer.json file, where I specify "shutdownAction": "none", because the VSCode docs say this setting should prevent my problem.
Indicates whether VS Code and other devcontainer.json supporting tools should stop the containers when the related tool window is closed / shut down.
Values are none, stopContainer (default for image or Dockerfile), and stopCompose (default for Docker Compose).
I managed to work around this issue with VSCode thanks to this post by using nohup, but it is not ideal for my workflow. Plus, the problem is particularly strange because I did not encounter it a few weeks ago. Am I missing something or is this an issue? Thanks!
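For reference, the nohup workaround mentioned above looks roughly like this (the script name is just a placeholder):
# start the script detached from the terminal so it keeps running after the VSCode window closes
nohup python my_script.py > output.log 2>&1 &
# check on its output later
tail -f output.log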
Plus, the problem is particularly strange because I did not encounter
it a few weeks ago.
Hi,
it sounds a bit like a problem caused by an upgrade.
Have you tried downgrading the ms-vscode-remote.remote-containers extension?
(right click -> Install Another Version).
I am using v0.245.2 and "shutdownAction": "none" keeps my container running when VS Code is closed.
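For completeness, a minimal devcontainer.json sketch with that setting, assuming the Dockerfile above sits next to it (the name is illustrative):
{
    "name": "masterthesis",
    "build": { "dockerfile": "Dockerfile" },
    "shutdownAction": "none"
}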

Docker image from scratch and entrypoint

I need to create a Docker image with my Flask program that is as small as possible.
To this end, I have compiled my Flask program with PyInstaller and I want to create a Docker image:
Dockerfile:
FROM scratch
COPY ./source/flask /
COPY ./source/libm-2.31.so /usr/lib/x86_64-linux-gnu/
ENTRYPOINT ["/flask"]
After running container I have error:
standard_init_linux.go:228: exec user process caused: no such file or directory
Source code can be downloaded here.
Please help.
As pointed out by BMitch, scratch is not even a minimal image; it is a pseudo-image containing nothing. An empty directory is the closest resemblance. It is useful when your application is a single binary, or in case you want to build your own Linux from scratch.
Since your application is written in Python, it requires some things you can find in an operating system, like an interpreter for example. Therefore, unless you want to spend weeks building everything from scratch, it will be better to use a regular Linux OS image. Pick debian, ubuntu or centos and you should be fine with it.
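For illustration, a hedged Dockerfile sketch along those lines, reusing the paths from the question and assuming the PyInstaller binary was built against a glibc compatible with Debian 11:
# a small but complete Linux base instead of scratch
FROM debian:bullseye-slim
# the PyInstaller-built binary from the original Dockerfile
COPY ./source/flask /flask
ENTRYPOINT ["/flask"]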
Note on Alpine images
There are also alpine images, famous for their small size. For now I recommend against using Alpine Linux if you are going to use pip. Many packages on PyPI have binary wheels, which significantly speed up build time. Until PEP 656 was accepted (17 Apr 2021) there were no wheels for Alpine Linux at all, meaning that every package you use has to be compiled from scratch. This is because Alpine uses the musl C library, while most other Linux distributions use glibc.
What's inside scratch
Though by itself there is nothing, there are some things mounted by Docker at runtime. If you are curious what these things are, here is a Dockerfile that adds ls to the image:
FROM busybox as b
FROM scratch
COPY --from=b /bin/ls /bin/ls
ENTRYPOINT ["/bin/ls"]
Once you've built it, you can use ls to explore:
❯ docker build . -t scr
❯ docker run --rm scr /
bin
dev
etc
proc
sys
❯ docker run --rm scr /bin
ls
❯ docker run --rm scr /dev
core
fd
full
mqueue
null
ptmx
pts
random
shm
stderr
stdin
stdout
tty
urandom
zero

How to install a python module in a docker container

My Docker knowledge is very poor; I have Docker installed only because I want to use freqtrade, so I followed this simple HOWTO:
https://www.freqtrade.io/en/stable/docker_quickstart/
Now , all freqtrade commands run using docker , for example this
D:\ft_userdata\user_data>docker-compose run --rm freqtrade backtesting --config user_data/cryptofrog.config.json --datadir user_data/data/binance --export trades --stake-amount 70 --strategy CryptoFrog -i 5m
Well, I started to have problems when I tried this strategy
https://github.com/froggleston/cryptofrog-strategies
for freqtrade. This strategy requires the Python module finta.
I understood that the Python module finta should be installed in my Docker container
and NOT on my Windows system (it would have been easy to "pip install finta" from the console!).
Even though I tried to find a solution on Stack Overflow and Google, I do not understand how to do this step (installing the finta Python module in the freqtrade container).
After several hours I am really lost.
Can someone explain to me in easy steps how to do this?
Freqtrade mount point is
D:\ft_userdata\user_data
You can get bash from your container with this command:
docker-compose exec freqtrade bash
and then:
pip install finta
OR
run only one command:
docker-compose exec freqtrade pip install finta
If the above solutions don't work, you can run the docker ps command and get the container ID of your container. Then
docker exec -it CONTAINER_ID bash
pip install finta
You need to make your own docker image that has finta installed. Luckily you can build on top of the standard freqtrade docker image.
First make a Dockerfile with these two lines in it
FROM freqtradeorg/freqtrade:stable
RUN pip install finta
Then build the image (calling the new image myfreqtrade) by running the command
docker build -t myfreqtrade .
Finally change the docker-compose.yml file to run your image by changing the line
image: freqtradeorg/freqtrade:stable
to
image: myfreqtrade
And that should be that.
The way to get our Python code running in a container is to pack it as a Docker image and then run a container based on it.
To generate a Docker image we need to create a Dockerfile that contains instructions needed to build the image. The Dockerfile is then processed by the Docker builder which generates the Docker image. Then, with a simple docker run command, we create and run a container with the Python service.
An example of a Dockerfile containing the instructions for assembling a Docker image for a Python service that installs finta is the following:
# set base image (host OS)
FROM python:3.8
# install dependencies
RUN pip install finta
# command to run on container start
CMD [ "python", "-V" ]
For each instruction or command in the Dockerfile, the Docker builder generates an image layer and stacks it upon the previous ones. Therefore, the Docker image resulting from the process is simply a read-only stack of different layers. To build the image, run:
docker build -t myimage .
Then, we can check that the image is in the local image store:
docker images
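Finally, running a container from the image executes the CMD from the Dockerfile, so you should see the interpreter version printed (a sketch using the image name from above):
docker run --rm myimage
# prints something like: Python 3.8.x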
Please refer to the freqtrade Dockerfile: https://github.com/freqtrade/freqtrade/blob/develop/Dockerfile

Docker Python Image

I have a RHEL host with Docker installed; it has Python 2.7 by default. My Python scripts need a few more modules, which
I can't install due to lack of sudo access; moreover, I don't want to mess with the default Python, which is needed for the host to function.
Now, I am trying to get Python in a Docker container where I can add a few modules and do the needful.
Issue: the RHEL host with Docker installed is not connected to the internet and cannot be connected.
The laptop I have doesn't have Docker either, and I can't install Docker here (no admin access) to create the Docker image and copy it to the RHEL host.
I was hoping that if a Docker image with Python can be downloaded from the internet, I might be able to use that as is!
Any pointers in an appropriate direction would be appreciated.
What have I done: tried searching for the Python images and been through the Docker documentation on creating an image.
Apologies if the above question sounds silly; I am getting better with Docker over time :)
If your environment is restricted enough that you can't use sudo to install packages, you won't be able to use Docker: if you can run any docker run command at all you can trivially get unrestricted root access on the host.
My Python scripts need a few more modules, which I can't install due to lack of sudo access; moreover, I don't want to mess with the default Python, which is needed for the host to function.
That sounds like a perfect use for a virtual environment: it gives you an isolated local package tree that you can install into as an unprivileged user and doesn't interfere with the system Python. For Python 2 you need a separate tool for it, with a couple of steps to install:
export PYTHONUSERBASE=$HOME
pip install --user virtualenv
~/bin/virtualenv vpy
. vpy/bin/activate
pip install ... # installs into vpy/lib/python2.7/site-packages
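After activation, a quick check that the interpreter and packages now come from the virtual environment rather than the system Python:
# both should point into the vpy directory while the environment is active
which python
which pip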
You can create a Docker image on any standalone machine and push the final required image to a Docker registry (Docker Hub). Then on your laptop you can pull that image and start working :)
Below are some key commands that will be required for the same.
To create an image, you will need to create a Dockerfile that installs all the packages.
Or you can also do sudo docker run -it ubuntu:16.04 and then install Python and other packages as required,
then sudo docker commit CONTAINER_ID NAME
sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
sudo docker push IMAGE_NAME
Then you pull this image on your laptop and start working.
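Putting those steps together, a concrete sketch (the container ID, image name and Docker Hub user below are placeholders):
# start a base container interactively and install Python plus the required modules inside it
sudo docker run -it ubuntu:16.04
# ... install packages inside the container, then exit ...
# commit the container to a new image (container ID and names are illustrative)
sudo docker commit 3f2d1c0a9b8e mypython:1.0
sudo docker tag mypython:1.0 myuser/mypython:1.0
# log in to Docker Hub first with: sudo docker login
sudo docker push myuser/mypython:1.0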
You can refer to this link for more docker commands https://github.com/akasranjan005/docker-k8s/blob/master/docker/basic-commands.md
Hope this helps. Thanks

Creating custom python docker images

I have Python code for which I want to create a Docker image. As per my understanding, we need a Dockerfile and our Python code code.py. Inside the Dockerfile we need to write:
FROM python:3
ADD code.py /
RUN pip3 install imapclient
CMD [ "python", "./code.py" ]
My first question is about this Dockerfile. First we have FROM python:3 because we want to use Python 3. Next we have added our code. In RUN we can list a dependency of our code. So, for example, if our code needs the Python package imapclient, we can mention it here so that it will be installed before the Docker image is built. But what if our code does not have any requirements? Is this RUN line important? Can we exclude it when we don't need it?
So now let's say we have finally created our Docker image python-hello-world using the command docker build -t python-hello-world .. I can see it using the command docker images -a. Now when I do docker ps, it is not listed there because no container is running. To start it, I'll have to do docker run python-hello-world. This will run the code. But I want it to always be running in the background, like a Linux service. How do I do that?
Is this line RUN important? Can we exclude it when we don't need it?
Yes, if your code doesn't need any extra packages then you can exclude it.
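For example, a minimal Dockerfile without the RUN line could look like this (a sketch, assuming code.py has no third-party dependencies):
FROM python:3
ADD code.py /
CMD [ "python", "./code.py" ]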
But I want it to be running always in the background like a Linux service. How to do that?
If you want to run it in the background then use the command below.
docker run -d --restart=always python-hello-world
This will start the container in the background, and it will start automatically when the system reboots.
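To check that it is still running and to see its output, something like this works (replace CONTAINER_ID with the ID shown by docker ps):
# list running containers and note the container ID or name
docker ps
# follow the logs of the background container
docker logs -f CONTAINER_ID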
