Running Python tool scripts against the current directory - python

So, I'm in a bit of a particular situation and I'm trying to find a clean solution.
Currently we've got 18 different repos, all with Python deployment utilities copied and pasted 18 times, each with its own venv... to me this is disgusting.
I'd like to bake those utilities into some kind of "tools" Docker image and just execute them wherever I need, instead of having each folder install all the dependencies 18 times.
/devtools/venv
/user-service/code
/data-service/code
/proxy-service/code
/admin-service/code
Ultimately I'd like to cd into user-service and run a command similar to docker run tools version_update.py -- and have the Docker image mount user-service's code and run the script against it.
How would I do this, and is there a better way I'm not seeing?

Why use Docker?
I would recommend placing your scripts in a "tools" directory alongside your services (or wherever you see fit); then you can cd into one of your service directories and run python ../tools/version_update.py.
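If the duplication is the main pain point even without Docker, the scripts can also share a single virtual environment; here is a minimal sketch, assuming the /devtools layout from the question (the requirements.txt path is hypothetical):
# one-time setup: a single shared venv for every tool script
python3 -m venv /devtools/venv
/devtools/venv/bin/pip install -r /devtools/tools/requirements.txt
# then, from any service directory:
cd user-service
/devtools/venv/bin/python /devtools/tools/version_update.py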

It would depend on your Docker image, but here is the basic concept.
In your Docker image, let's say we have a /code directory where we will mount the source code that we want to work on, and a /tools directory with all of our scripts.
We can then mount whatever directory we want onto /code in the Docker image and run whatever script we want. The working directory inside the container would be set to /code, and the PATH would include /tools. So, using your example, the docker run command would look like this:
docker run -v /user-service/code:/code tools version_update.py
This would run the tools Docker image, mount the local /user-service/code directory onto /code in the container, run the version_update.py script against that code, and then exit.
The same image can be used for all the other projects as well; just change the mount point (assuming they all have the same structure):
docker run -v /data-service/code:/code tools version_update.py
docker run -v /proxy-service/code:/code tools version_update.py
docker run -v /admin-service/code:/code tools version_update.py
And if you want to run a different tool, just change the command that you pass in:
docker run -v /user-service/code:/code tools other_command.py
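For completeness, the tools image itself might be built along these lines. This is only a sketch: the python:3.10 base, the requirements.txt file, and the tools/ directory name are all assumptions, and each script needs a #!/usr/bin/env python3 shebang line so it can be run by name.
FROM python:3.10
# install the shared dependencies once, inside the image
COPY requirements.txt /tools/requirements.txt
RUN pip install -r /tools/requirements.txt
# copy the scripts and make them executable so PATH lookup finds them
COPY tools/ /tools/
RUN chmod +x /tools/*.py
ENV PATH="/tools:${PATH}"
# the mounted source code lands here, so scripts run against it by default
WORKDIR /code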

How to import a dockerized common module from a different docker in python

I have a Python file named utils.py containing generic functions, and another set of programs, say pgm1.py, pgm2.py, and pgm3.py, which import utils.py and invoke its functions, e.g. utils.send_email(), utils.time_convert(), etc.
My requirement is to dockerize utils, pgm1, pgm2, and pgm3 in different containers and still be able to access the generic functions.
Can someone tell me how this can be achieved?
You have to declare the dependency using the normal Python packaging tools (in your setup.cfg file, or using a tool like Pipenv or Poetry) and install it into each image.
A Docker image contains an application, plus all of its dependencies. A container can't access the files, libraries, or applications in other containers. So it doesn't make sense to have "a container of generic functions" that's not running an application, and you can't "import libraries from another container"; since the filesystems are isolated from each other, one container can't access the *.py files in another.
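For the packaging route, a sketch of what the dependency declaration could look like: utils.py is split into its own installable package, and each program's setup.cfg pulls it in (the package name and Git URL here are hypothetical).
# setup.cfg for pgm1
[metadata]
name = pgm1

[options]
py_modules = pgm1
install_requires =
    my-utils @ git+https://github.com/example/my-utils.git
Each image's pip install . would then bake the shared code in at build time.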
As you've described the problem, you're dealing with a small number of individual files, and the size of the Python interpreter will be much larger than any individual script. In this case it's fine to create an image that includes all of them:
FROM python:3.10
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY ./ ./
CMD ["./pgm1.py"] # for example; do not use ENTRYPOINT here
You can then easily override that CMD when you run the container to select a different program.
docker run -d --name program-1 my-image ./pgm1.py
docker run -d --name program-2 my-image ./pgm2.py

Docker image from scratch and entrypoint

I need to create a Docker image with my Flask program that is as small as possible.
To that end, I have compiled my Flask program with PyInstaller, and I want to create a Docker image:
Dockerfile:
FROM scratch
COPY ./source/flask /
COPY ./source/libm-2.31.so /usr/lib/x86_64-linux-gnu/
ENTRYPOINT ["/flask"]
After running the container I get this error:
standard_init_linux.go:228: exec user process caused: no such file or directory
Source code can be downloaded here.
Please help.
As pointed out by @BMitch, scratch is not even a minimal image; it is a pseudo-image containing nothing, and the closest thing it resembles is an empty directory. It is useful when your application is a single static binary, or in case you want to build your own Linux from scratch.
Since your application is written in Python, it requires some things you can find in an operating system, like an interpreter, for example. Therefore, unless you want to spend weeks building everything from scratch, it is better to use a regular Linux OS image. Pick debian, ubuntu, or centos and you should be fine with it.
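If the goal is simply a small Flask image, a slim Python base is usually the pragmatic middle ground. A sketch, assuming the app's dependencies are listed in requirements.txt and its entry point is app.py:
FROM python:3.10-slim
WORKDIR /app
# install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]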
Note on Alpine images
There are also alpine images, famous for their low size. For now I recommend against using Alpine Linux if you are going to use pip. Many packages on PyPI have binary wheels, which significantly speed up build time. Until PEP 656 was accepted (17 Apr 2021) there were no wheels at all for Alpine Linux, meaning that every package you use has to be compiled from scratch. This is because Alpine uses the musl C library, while most other Linux distributions use glibc.
What's inside scratch
Though by itself it contains nothing, there are some things mounted by Docker at runtime. If you are curious what these things are, here is a Dockerfile that adds ls to the image:
FROM busybox as b
FROM scratch
COPY --from=b /bin/ls /bin/ls
ENTRYPOINT ["/bin/ls"]
Once you've built it, you can use ls to explore:
❯ docker build . -t scr
❯ docker run --rm scr /
bin
dev
etc
proc
sys
❯ docker run --rm scr /bin
ls
❯ docker run --rm scr /dev
core
fd
full
mqueue
null
ptmx
pts
random
shm
stderr
stdin
stdout
tty
urandom
zero

What is a bare-bones Dockerfile/docker-compose.yml to run python scripts (with specific versions of python/packages)?

My laptop (a MacBook) came with an old version of Python (2.7) pre-installed.
I have a couple of different Python scripts, task1.py and task2.py, that require Python 3.7 and pip install some_handy_python_package.
Several online sources say updating the system-wide version of Python on a MacBook could break some (unspecified) legacy apps.
This seems like a perfect use case for Docker, to run some local scripts with a custom Python setup, but I cannot find any online examples of this simple use case:
Laptop hosts folder mystuff has two scripts task1.py and task2.py (plus a Dockerfile and docker-compose.yml file)
Create a docker image with python 3.7 and whatever required packages, eg pip install some_handy_python_package
Can run any local-hosted python scripts from inside the docker container
perhaps something like docker run -it --rm some-container-name, then at a bash prompt 'inside' the container, run the script(s) via python task1.py
or perhaps something like docker-compose run --rm console python task1.py
I assume the Dockerfile starts off something like this:
FROM python:3.7
RUN pip install some_handy_python_package
but I'm not sure what to add to either the Dockerfile or a docker-compose.yml file so I can either a) run a bash prompt in Docker that lets me run python task1.py, or b) define a 'console' service that can invoke python task1.py from the command line
In case it helps someone else, here's a basic example of how to run some local-folder Python scripts inside a Dockerized Python environment. (A better example would define the volume share in a docker-compose.yml file.)
cd sc2
pwd # /Users/thisisme/sc2 -- you use this path later, when running docker, to set up a volume share
Create Dockerfile
# Dockerfile
FROM python:3.7
RUN pip install some_package
Build the image, tagged rp in this example:
docker build -t rp .
In the local folder, create some python scripts, for example: task1.py
# task1.py
from some_package import SomePackage
# do some stuff
In the local folder, run the script in the container by mounting the folder as an /app share point:
docker run --rm -v YOUR_PATH_TO_FOLDER:/app rp python /app/task1.py
Specifically:
docker run --rm -v /Users/thisisme/sc2:/app rp python /app/task1.py
And sometimes it is handy to run the Python interpreter in the container while developing code:
docker run -it --rm rp
>>> 2 + 2
4
>>>
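Since the question also asked about docker-compose, here is a minimal docker-compose.yml sketch for the same setup; the console service name just follows the question's wording:
# docker-compose.yml
version: "3.8"
services:
  console:
    build: .
    volumes:
      - .:/app   # share the local folder into the container
    working_dir: /app
With that in place, docker-compose run --rm console python task1.py matches the workflow the question describes.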

Creating custom python docker images

I have some Python code for which I want to create a Docker image. As per my understanding, we need a Dockerfile and our Python code, code.py. Inside the Dockerfile we need to write:
FROM python:3
ADD code.py /
RUN pip3 install imapclient
CMD [ "python", "./code.py" ]
My first question is about this Dockerfile. First we have mentioned FROM python:3 because we want to use Python 3. Next we have added our code. In RUN we can list a dependency of our code. So, for example, if our code needs the Python package imapclient, we can mention it here so that it is installed when the Docker image is built. But what if our code does not have any requirements? Is this RUN line important? Can we exclude it when we don't need it?
So now let's say we have finally created our Docker image python-hello-world using the command docker build -t python-hello-world .. I can see it using the command docker images -a. Now when I do docker ps, it is not listed there because the container is not running. To start it, I'll have to do docker run python-hello-world. This will run the code. But I want it to be running always in the background, like a Linux service. How do I do that?
Is this line RUN important? Can we exclude it when we don't need it?
Yes, if your code doesn't need any packages, then you can exclude it.
But I want it to be running always in the background like a Linux service. How to do that?
If you want to run it in the background, then use the command below.
docker run -d --restart=always python-hello-world
This will start the container in the background and it will start automatically when the system reboots.
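One caveat: --restart=always restarts the container whenever the process exits, so it only behaves like a service if code.py keeps running. A sketch of a simple polling loop (the mailbox-checking detail is an assumption based on the imapclient dependency):
# code.py: keep the process alive so the container stays up
import time

def check_mailbox():
    # hypothetical work using imapclient would go here
    pass

while True:
    check_mailbox()
    time.sleep(60)  # pause between polls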

How do you iteratively develop with docker?

How does one iteratively develop their app using Docker? I have only just started using it and my workflow is very slow, so I'm pretty sure I'm using it wrong.
I'm following along with a python machine learning course on Youtube, and so I am using Docker to work with python 3. I know I can use virtualenv or a VM, but I want to learn Docker as well so bear with me.
My root directory looks like so:
Dockerfile main.py*
My docker file:
FROM python
COPY . /src
RUN pip install quandl
RUN pip install pandas
CMD ["python", "/src/main.py"]
And the Python file:
#!/usr/bin/env python
import pandas as pd
import quandl
print("Hello world from main.py")
df = quandl.get("WIKI/GOOGL")
print("getting data frame for WIKI/GOOGL")
print(df.head())
My workflow has been:
Learn something new from the tutorial
Update python file
Build the docker image: docker build -t myapp .
Run the app: docker run myapp python /src/main.py
Questions:
How can I speed this all up? For every change I want to try, I end up rebuilding. This causes pip to fetch the dependencies each time, which takes way too long.
Instead of editing a python file and running it, how might I get an interactive shell from the python version running in the container?
If I wanted my program to write out a file, how could I get this file back to my local system from the container after the program has finished?
Thanks for the help!
Edit:
I should add, this was the tutorial I was following in general to run some python code in Docker: https://www.civisanalytics.com/blog/using-docker-to-run-python/
Speeding up the rebuild process
The simplest thing you can do is reorder your Dockerfile.
FROM python
RUN pip install quandl
RUN pip install pandas
COPY . /src
CMD ["python", "/src/main.py"]
The reason this helps is that Docker will re-use the cached build for commands it has already run. Now when you rebuild after modifying your source code, it will re-use the build results for the pip commands, as they do not need to be run again. It will only run the COPY step.
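A common refinement of the same idea is to copy only a dependency list first, so the pip layer stays cached until the dependencies themselves change; the requirements.txt file (listing quandl and pandas) is an assumption here:
FROM python
# copying only the dependency list keeps the install layer cacheable
COPY requirements.txt /src/requirements.txt
RUN pip install -r /src/requirements.txt
# source edits only invalidate the steps from here down
COPY . /src
CMD ["python", "/src/main.py"]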
Getting a python shell
You can exec a shell in the running container and run your python command.
docker exec -it <container-id> bash
python <...>
Or, you can run a container with just a shell, and skip running your app entirely (then, run it however you want).
docker run -it <image> bash
python <...>
Writing outside the container
Mount an external directory into the container. Then write to the mounted path.
docker run -v /local/path:/path <.. rest of command ..>
Then when you write in the container to /path/file, the file will show up outside the container at /local/path/file.
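For example, if main.py saved its DataFrame with df.to_csv("/data/googl.csv") (the /data path and file name are hypothetical), the full command might look like:
docker run -v "$PWD/output":/data myapp python /src/main.py
# the file then appears at ./output/googl.csv on the host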
