Docker image from scratch and entrypoint - python

I need to create a docker image with my flask program as small as possible.
Due to this, I have compiled my flask program with PyInstaller, and I want to create a docker image:
Dockerfile:
FROM scratch
COPY ./source/flask /
COPY ./source/libm-2.31.so /usr/lib/x86_64-linux-gnu/
ENTRYPOINT ["/flask"]
After running the container I get this error:
standard_init_linux.go:228: exec user process caused: no such file or directory
Source code can be downloaded here.
Please help.

As pointed out by @BMitch, scratch is not even a minimal image; it is a pseudo-image containing nothing. An empty directory is a close analogy. It is useful when your application is a single binary, or when you want to build your own Linux from scratch.
Since your application is written in python, it requires some things you can normally find in an operating system, such as an interpreter. Therefore, unless you want to spend weeks building everything from scratch, it is better to use a regular Linux OS image. Pick debian, ubuntu or centos and you should be fine.
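If you still want a small image without building everything yourself, a slim Debian-based image is a reasonable middle ground. A minimal sketch (the file names app.py and requirements.txt are assumptions; adjust them to your project):

```Dockerfile
# Slim Debian-based Python image: much smaller than the full image,
# but still has glibc, a shell and a package manager
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
ENTRYPOINT ["python", "app.py"]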
Note on Alpine images
There are also alpine images, famous for their small size. For now I recommend against using Alpine Linux if you are going to use pip. Many packages on PyPI ship binary wheels, which significantly speed up build time. Until PEP 656 was accepted (17 Apr 2021) there were no wheels for Alpine Linux at all, meaning that every package you use has to be compiled from scratch. This is because Alpine uses the musl C library, while most other Linux distributions use glibc.
What's inside scratch
Though by itself it contains nothing, some things are mounted by Docker at runtime. If you are curious what these things are, here is a Dockerfile that adds ls to the image:
FROM busybox as b
FROM scratch
COPY --from=b /bin/ls /bin/ls
ENTRYPOINT ["/bin/ls"]
Once you've built it, you can use ls to explore:
❯ docker build . -t scr
❯ docker run --rm scr /
bin
dev
etc
proc
sys
❯ docker run --rm scr /bin
ls
❯ docker run --rm scr /dev
core
fd
full
mqueue
null
ptmx
pts
random
shm
stderr
stdin
stdout
tty
urandom
zero

Related

DOCKER Python Path

I am new to Docker. One thing is confusing me: when I print the output of which python in Docker, it seems to point to the system's python, even though I have specified python3.7 as the base image, so shouldn't it be pointing to the image's python?
My docker file is as follows:
FROM python:3.7
RUN which python3
RUN which python3.7
RUN which python
The output is
Sending build context to Docker daemon 2.048kB
Step 1/4 : FROM python:3.7
---> 7c891de3e220
Step 2/4 : RUN which python3
---> Running in bfcab000b493
/usr/local/bin/python3
Removing intermediate container bfcab000b493
---> be30731a0a5a
Step 3/4 : RUN which python3.7
---> Running in 144cf28963eb
/usr/local/bin/python3.7
Removing intermediate container 144cf28963eb
---> 7434c6aa69cb
Step 4/4 : RUN which python
---> Running in 88e3133f4e41
/usr/local/bin/python
Removing intermediate container 88e3133f4e41
---> 872bfb66fc7d
Successfully built 872bfb66fc7d
Successfully tagged docker_testing:latest
You can see that it is resolving all the python commands to /usr/local/bin/pythonX.
I want to ask whether this is Docker's Python being used, or my system's Python.
Thanks.
When you run RUN which python3, that is the path inside your docker image, not your system path. You can try RUN touch /usr/bin/testtesttest and check on your system: the file will not appear there.
Data in a docker container is stored on your system only when you mount a volume.
No, Docker is fetching the python image from Docker Hub.
It will not use your system python.

You can see that it is resolving all the python commands to /usr/local/bin/pythonX

This is because Docker creates a separate filesystem for the image, and this path comes from that filesystem.
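If in doubt, you can also ask the interpreter itself where it lives; a minimal check that works in any image (or on the host, for comparison):

```python
import sys

# Prints the absolute path of the interpreter that is actually running.
# Inside the python:3.7 image this is /usr/local/bin/python3.7;
# on your host it would be your system interpreter instead.
print(sys.executable)
```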

How can I identify the source of a volume inside a docker container?

TL;DR: is there a way to check whether a volume /foo built into a docker image has been overwritten at runtime by remounting with -v host/foo:/foo?
I have designed an image that runs a few scripts on container initialization using s6-overlay to dynamically generate a user, transfer permissions, and launch a few services under that uid:gid.
One of the things I need to do is pip install -e /foo a python module mounted at /foo. This also installs a default version of /foo contained in the docker image if the user doesn't specify a volume. The reason I am doing this install at runtime is because this container is designed to contain the entire environment for development and experimentation of foo, so if a user mounts a system version of foo, e.g. -v /home/user/foo:/foo, the user can develop by updating host:/home/user/foo or in container:/foo and all changes will persist and the image won't need to be rebuilt to get new changes. It needs to be an editable install so that new changes don't require reinstallation.
I have this working now.
I would like to speed up container initialization by moving this pip install into the image build, and then only install the module at runtime if the user has mounted a new /foo at runtime using -v /home/user/foo:/foo.
Of course, there are other ways to do this. For example, I could copy foo to /bar at build time and install it with pip install /bar... Then at runtime just check if /foo exists; if it doesn't, create a symlink /foo -> /bar, and if it does, pip uninstall foo and pip install -e /foo... but this isn't the cleanest solution. I could also just mv /bar /foo at runtime if /foo doesn't exist... but I'm not sure how pip would handle the change in module path.
The only way to do this that I can think of is to map the docker socket into the container, so you can run docker inspect from inside the container and see the mounted volumes, like this:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker
and then inside the container
docker inspect $(hostname)
I've used the docker image here, since it has docker installed. You can of course use another image; you just have to install docker in it.
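If you only need a yes/no answer from inside the container, rather than the full inspect output, a simpler sketch is to check whether /foo is a mount point: a directory baked into the image is not one, while a path bind-mounted with -v is. This assumes the image does not itself declare VOLUME /foo, in which case Docker mounts an anonymous volume there and the check would also return True.

```python
import os

def is_overridden(path: str = "/foo") -> bool:
    """Return True if `path` is a mount point, i.e. something was
    mounted over it at runtime (e.g. with -v host/foo:/foo)."""
    return os.path.ismount(path)

# Sanity check: the root filesystem is always a mount point.
print(os.path.ismount("/"))  # True
```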

How to install a python module in a docker container

My Docker knowledge is very poor; I have Docker installed only because I want to use freqtrade, so I followed this simple HOWTO:
https://www.freqtrade.io/en/stable/docker_quickstart/
Now all freqtrade commands run using docker, for example this one:
D:\ft_userdata\user_data>docker-compose run --rm freqtrade backtesting --config user_data/cryptofrog.config.json --datadir user_data/data/binance --export trades --stake-amount 70 --strategy CryptoFrog -i 5m
Well, I started to have problems when I tried this strategy for freqtrade:
https://github.com/froggleston/cryptofrog-strategies
This strategy requires the Python module finta.
I understood that the finta module should be installed in my Docker container and NOT on my Windows system (there it would have been an easy pip install finta from the console!).
Even though I searched stackoverflow and google, I do not understand how to do this step (install the finta python module in the freqtrade container).
After several hours I am really lost.
Can someone explain to me in easy steps how to do this?
The freqtrade mount point is
D:\ft_userdata\user_data
You can get bash from your container with this command:
docker-compose exec freqtrade bash
and then:
pip install finta
OR
run only one command:
docker-compose exec freqtrade pip install finta
If the above solutions don't work, you can run the docker ps command and get the container id of your container. Then:
docker exec -it CONTAINER_ID bash
pip install finta
Note that anything installed with exec only lives in that particular container; if the container is removed and recreated, the package is gone.
You need to make your own docker image that has finta installed. Luckily you can build on top of the standard freqtrade docker image.
First make a Dockerfile with these two lines in it
FROM freqtradeorg/freqtrade:stable
RUN pip install finta
Then build the image (calling the new image myfreqtrade) by running the command
docker build -t myfreqtrade .
Finally change the docker-compose.yml file to run your image by changing the line
image: freqtradeorg/freqtrade:stable
to
image: myfreqtrade
And that should be that.
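For reference, the relevant fragment of docker-compose.yml would then look something like this (a sketch; the other settings from the freqtrade quickstart file stay as they are):

```yaml
services:
  freqtrade:
    # was: image: freqtradeorg/freqtrade:stable
    image: myfreqtrade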
The way to get our Python code running in a container is to pack it as a Docker image and then run a container based on it.
To generate a Docker image we need to create a Dockerfile that contains instructions needed to build the image. The Dockerfile is then processed by the Docker builder which generates the Docker image. Then, with a simple docker run command, we create and run a container with the Python service.
An example of a Dockerfile containing instructions for assembling a Docker image for a Python service that installs finta is the following:
# set base image (host OS)
FROM python:3.8
# install dependencies
RUN pip install finta
# command to run on container start
CMD [ "python", "-V" ]
For each instruction or command from the Dockerfile, the Docker builder generates an image layer and stacks it upon the previous ones. Therefore, the Docker image resulting from the process is simply a read-only stack of different layers.
docker build -t myimage .
Then we can check that the image is in the local image store:
docker images
Please refer to the freqtrade Dockerfile: https://github.com/freqtrade/freqtrade/blob/develop/Dockerfile

Docker Python Image

I've a RHEL host with docker installed; it has Python 2.7 by default. My python scripts need a few more modules, which I can't install due to lack of sudo access; moreover, I don't want to mess up the default Python, which the host needs to function.
So I am trying to get Python in a docker container, where I can add the few modules and do the needful.
Issues:
The RHEL host with docker is not connected to the internet, and can't be connected.
The laptop I have doesn't have docker either, and I can't install docker on it (no admin access) to create the docker image and copy it to the RHEL host.
I was hoping that if a docker image with python can be downloaded from the internet, I might be able to use that as is.
Any pointers in any appropriate direction would be appreciated.
What have I done: tried searching for python images, and been through the docker documentation on creating images.
Apologies if the above question sounds silly; I am getting better with time on docker :)
If your environment is restricted enough that you can't use sudo to install packages, you won't be able to use Docker: if you can run any docker run command at all you can trivially get unrestricted root access on the host.
My python scripts needs a bit more modules which I can't install due to lack of sudo access & moreover, I dont want to screw up with the default Py which is needed for host to function.
That sounds like a perfect use case for a virtual environment: it gives you an isolated local package tree that you can install into as an unprivileged user, and it doesn't interfere with the system Python. For Python 2 you need a separate tool, installed in a couple of steps:
export PYTHONUSERBASE=$HOME    # make pip install --user put things under $HOME
pip install --user virtualenv  # installs the virtualenv script into ~/bin
~/bin/virtualenv vpy           # create an isolated environment in ./vpy
. vpy/bin/activate             # activate it for the current shell
pip install ...                # installs into vpy/lib/python2.7/site-packages
You can create a docker image on any standalone machine and push the final image to a docker registry (Docker Hub). Then on your laptop you can pull that image and start working :)
Below are the key commands required for this.
To create an image, you will need to write a Dockerfile with all the packages installed.
Or you can run sudo docker run -it ubuntu:16.04, then install python and the other packages as required,
then sudo docker commit container_id name
sudo docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
sudo docker push IMAGE_NAME
Then you pull this image on your laptop and start working.
You can refer to this link for more docker commands: https://github.com/akasranjan005/docker-k8s/blob/master/docker/basic-commands.md
Hope this helps. Thanks

Scripts within Python on current directory

So, I'm in a bit of a particular situation and I'm trying to find a clean solution.
Currently we've got 18 different repos, all with the same python deployment utilities copied and pasted 18 times, each with its own venv... to me this is disgusting.
I'd like to bake those utilities into some kind of "tools" docker image and just execute them wherever I need them, instead of having each folder install all the dependencies 18 times.
/devtools/venv
/user-service/code
/data-service/code
/proxy-service/code
/admin-service/code
Ultimately I'd like to cd into user-service and run a command similar to docker run tools version_update.py, and have the docker image mount user-service's code and run the script against it.
How would I do this, and is there a better way I'm not seeing?
Why use docker?
I would recommend placing your scripts into a "tools" directory alongside your services (or wherever you see fit); then you can cd into one of your service directories and run python ../tools/version_update.py.
It would depend on your docker image, but here is the basic concept.
In your docker image, let's say we have a /code directory where we will mount the source code we want to work on, and a /tools directory with all of our scripts.
We can then mount whatever directory we want onto /code in the docker image and run whatever script we want. The working directory inside the container would be set to /code, and /tools would be on the path. So using your example, the docker run command would look like this:
docker run -v /user-service/code:/code tools version_update.py
This would run the tools docker image, mount the local /user-service/code directory onto /code in the container, run the version_update.py script on that code, and then exit.
The same image can be used for all the other projects as well; just change the mount point (assuming they all have the same structure).
docker run -v /data-service/code:/code tools version_update.py
docker run -v /proxy-service/code:/code tools version_update.py
docker run -v /admin-service/code:/code tools version_update.py
And if you want to run a different tool, just change the command you pass in:
docker run -v /user-service/code:/code tools other_command.py
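A sketch of what such a tools image could look like (the requirements.txt and the script names are assumptions; this relies on the scripts being executable and starting with a #!/usr/bin/env python3 shebang so that docker can resolve them via PATH):

```Dockerfile
FROM python:3.8-slim
# Install the shared dependencies once, instead of once per repo
COPY requirements.txt /tools/requirements.txt
RUN pip install --no-cache-dir -r /tools/requirements.txt
# Bake the utility scripts into the image and put them on the PATH
COPY version_update.py other_command.py /tools/
RUN chmod +x /tools/*.py
ENV PATH="/tools:${PATH}"
# Scripts run against whatever is mounted at /code
WORKDIR /code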
