Mounting a directory within a docker build - python

I'm wondering how to use a simple script with a docker container.
The script is:
example python script

# Example python script
import argparse
import pathlib


def run(
    *,
    input: pathlib.Path | str,
    output: pathlib.Path | str,
) -> None:
    pathlib.Path(output).write_text(pathlib.Path(input).read_text().upper())


def main() -> int:
    desc = "example script"
    parser = argparse.ArgumentParser(
        description=desc,
        formatter_class=argparse.RawDescriptionHelpFormatter,
    )
    parser.add_argument(
        "-i",
        "--input",
        help="input file",
        required=True,
    )
    parser.add_argument(
        "-o",
        "--output",
        help="output file",
    )
    parser.add_argument(
        "-x",
        "--overwrite",
        help="Whether to overwrite a previously created file.",
        action="store_true",
    )
    args = parser.parse_args()
    if not pathlib.Path(args.input).exists():
        raise FileNotFoundError(f"input file {args.input} not found")
    if not args.output:
        parser.error("output not given")
    if pathlib.Path(args.output).exists() and not args.overwrite:
        raise FileExistsError(f"{args.output} already exists.")
    run(input=args.input, output=args.output)
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
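The intended behaviour can be checked without Docker; here is a minimal self-contained sketch (re-declaring the run() helper from the script above, and using a temporary directory rather than real files):

```python
import pathlib
import tempfile

# run() re-declared from the script above so this snippet is self-contained
def run(*, input, output):
    pathlib.Path(output).write_text(pathlib.Path(input).read_text().upper())

with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp) / "input.txt"
    dst = pathlib.Path(tmp) / "output.txt"
    src.write_text("text")
    run(input=src, output=dst)
    print(dst.read_text())  # TEXT
```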
The script works fine on my system (without docker).
example docker file
The Dockerfile is:
FROM python:3.10.6-bullseye
COPY . .
ENTRYPOINT ["python", "example.py"]
This works (ish) after the following:
# build
docker build -t demo .
# run
docker run demo --help
Which outputs:
usage: example.py [-h] -i INPUT [-o OUTPUT] [-x]
example.
options:
-h, --help show this help message and exit
-i INPUT, --input INPUT
input file
-o OUTPUT, --output OUTPUT
output file
-x, --overwrite Whether to overwrite previously created file.
But I'm not sure how to use it with the -i and -o arguments.
what I'd like to do
I'd like to be able to do the following:
echo "text" > input.txt
# Create output from input
docker run demo -i input.txt -o output.txt
# Create output from input and say it's ok to overwrite
docker run demo -i input.txt -o output.txt -x
Afterwards there should be an output.txt file containing TEXT.
Error
I've tried to do this with the above command, and it doesn't work.
Eg:
echo "this" > input.txt
docker run demo -i input.txt -o output.txt -x
After this, no output.txt file containing THIS is created.
Attempted solution (--mount within the shell command)
Using the following seems to work - but it feels like a lot to put into a shell command:
docker run \
--mount type=bind,source="$(pwd)",target=/check \
--workdir=/check demo:latest \
-i input.txt -o output.txt -x
Is there a way to do the --mount within the dockerfile itself?

I am doing a similar thing by running a compiler inside a docker container.
The docker image gets rebuilt whenever there is a new version of the compiler or of the underlying image.
The container runs whenever I want to compile something. I also have to mount source and target directories, but my docker command is shorter than yours:
docker run --rm -v /sourcecode:/project:ro -v /compiled:/output:rw -v cache:/cache:rw compilerimagename
All the rest is defined within the image.
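To answer the question directly: no - --mount/-v are runtime options and cannot go into the Dockerfile, because bind mounts do not exist at build time. What you can do is record them in a Compose file, so the long docker run invocation shrinks to a short docker compose run. A hedged sketch (the service/image name demo is a placeholder matching your build tag):

```yaml
# docker-compose.yml (sketch; assumes the Dockerfile from the question)
services:
  demo:
    build: .
    image: demo:latest
    working_dir: /check
    volumes:
      - .:/check   # bind-mount the current directory, like --mount above
```

Then `docker compose run --rm demo -i input.txt -o output.txt -x` runs the image's entrypoint with the current directory mounted, and the mount details live in the file rather than on the command line.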

Related

bash shell commands run through python to be universal with windows, mac, and linux

I need to run bash shell commands through Python so that they work on both PC (Windows) and mac/Linux. ./bin/production doesn't work in PowerShell, and putting 'bash' in front gives an error that the 'docker' command isn't recognized.
./bin/production contents:
#!/bin/bash
docker run --rm -it \
--volume ${PWD}/prime:/app \
$(docker build -q docker/prime) \
npm run build
This is the python script:
import subprocess
from python_on_whales import docker
cmd = docker.run('docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build')
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)
This is the error I get when running the python script:
python_on_whales.exceptions.NoSuchImage: The docker command executed was C:\Program Files\Docker\Docker\resources\bin\docker.EXE image inspect docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build.
It returned with code 1
The content of stdout is '[]
'
The content of stderr is 'Error response from daemon: no such image: docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build: invalid reference format: repository name must be lowercase
'
Running the command docker run --rm -it --volume ${PWD}/prime:/app $(docker build -q docker/prime) npm run build as one long line in PowerShell works, but we want a universal standard command for both PC and mac/Linux.
The Python on Whales docker.run() function doesn't take a docker run ... command line. It is a native Python API where you need to express the various Docker options as function parameters.
In principle you could rewrite this Python script using that API:
from pathlib import Path

from python_on_whales import docker

# build the image, returns an Image object
image = docker.build(Path.cwd() / 'docker' / 'prime')

# start the container; like `docker run ...`
docker.run(image,
           command=['npm', 'run', 'build'],
           volumes=[(Path.cwd() / 'prime', '/app')],  # -v $(PWD)/prime:/app
           interactive=True,  # -i (required?)
           tty=True,          # -t (required?)
           remove=True)       # --rm
The return value from docker.run() (without detach=True) is the container's stdout, and the examples print() that data.
This might not be what you're looking for but you can always try this:
import platform
import subprocess
import os

cur_os = platform.system()
if cur_os == "Windows":
    print("You are on windows")
    os.system('Command here')  # for windows
elif cur_os == "Darwin":
    print("You are on mac")
    subprocess.call('Command goes here')  # for mac
Edit:
I'm intermediate with Python, so don't judge; if I did something wrong, please give me feedback. Thanks.
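As a side note to the branching above: when the command is passed to subprocess as a list of arguments, it behaves the same on Windows, macOS, and Linux, so a single code path usually suffices. A minimal sketch of that idea:

```python
import subprocess
import sys

# A hedged simplification: one portable code path; branch per-OS only
# where the commands themselves genuinely differ.
def run(cmd):
    return subprocess.call(cmd)

# Portable example: invoke the current Python interpreter.
rc = run([sys.executable, "--version"])
print(rc)  # 0 on success
```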

Cannot reach device/ip when started by docker, but from shell inside the same container

I want to check whether a server/device is reachable, using a Python script running inside Docker.
I'm using Python 3.7.
The script looks like the following snippet (stripped down):
import platform
import subprocess
import asyncio
from argparse import ArgumentParser
from time import sleep

from models.configuration import Configuration

parser = ArgumentParser()
# device ip or hostname
parser.add_argument(
    '-d', '--device',
    type=str,
    required=True,
)


async def main():
    args = parser.parse_args()
    configuration = Configuration(args)  # the configuration object stores the arguments
    param = '-n' if platform.system().lower() == 'windows' else '-c'
    while True:
        result = subprocess.call(['ping', param, '1', configuration.device])
        print(f'## {result}')
        # TODO: result = 0 => success, result > 0 => failure
        sleep(5)


if __name__ == '__main__':
    asyncio.run(main())
My Dockerfile:
FROM python:3.7
WORKDIR /usr/src/app
COPY . .
RUN pip install --no-cache-dir -r requierments.txt
ENTRYPOINT [ "python3", "./main.py", "-d IP_OR_HOSTNAME" ]
I also tried CMD instead of ENTRYPOINT.
I build and start the container with the following commands
docker build -t my-app .
docker run -it --network host --name my-app my-app
Running the script by docker, the ping command exits with the exit code 2 (Name or Service not known).
When I start the script from the shell inside the container (python3 /usr/src/app/main.py -d IP_OR_HOSTNAME), the device is reachable (exit code 0).
As far as I know I have to use the network mode host.
Any ideas why the script cannot reach the device when launched by docker, but from shell inside the container?
(I am open to suggestions for a better title)
The various Dockerfile commands that run commands have two forms. In shell form, without special punctuation, Docker internally runs a shell and the command is broken into words using that shell's normal rules. If you write the command as a JSON array, though, it uses an exec form and the command is executed with exactly the words you give it.
In your command:
ENTRYPOINT [ "python3", "./main.py", "-d IP_OR_HOSTNAME" ]
There are three words in the command: python3, ./main.py, and then as a single argument -d IP_OR_HOSTNAME including an embedded space. When the Python argparse module sees this, it interprets it as a single option with the -d short option and text afterwards, and so the value of the "hostname" option is IP_OR_HOSTNAME including a leading space.
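The effect can be reproduced outside Docker by handing argparse the same single argv word:

```python
import argparse

# Simulate the exec-form ENTRYPOINT: "-d IP_OR_HOSTNAME" arrives as ONE
# argv element, so argparse treats it as -d with an attached value.
parser = argparse.ArgumentParser()
parser.add_argument('-d', '--device')
args = parser.parse_args(["-d IP_OR_HOSTNAME"])
print(repr(args.device))  # ' IP_OR_HOSTNAME' -- note the leading space
```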
There are various alternate ways to "spell" this that will have the effect you want:
# Split "-d" and the argument into separate words
ENTRYPOINT ["python3", "./main.py", "-d", "IP_OR_HOSTNAME"]
# Remove the space between "-d" and its option
ENTRYPOINT ["python3", "./main.py", "-dIP_OR_HOSTNAME"]
# Use the shell form to parse the command into words "normally"
ENTRYPOINT python3 ./main.py -d IP_OR_HOSTNAME

ENV in docker file not getting replaced

I have a very simple docker file
FROM python:3
WORKDIR /usr/src/app
ENV CODEPATH=default_value
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
Here is my container command
docker run -e TOKEN="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
when I see container logs it shows
python3: can't open file '/usr/src/app/${TOKEN}': [Errno 2] No such file or directory
It looks like what you want to do is override the default path to the python file which is run when you launch the container. Rather than passing this option in as an environment variable, you can just pass the path to the file as an argument to docker run, which is the purpose of CMD in your dockerfile. What you set as the CMD option is the default, which users of your image can easily override by passing an argument to the docker run command.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest "subfolder/testmypython.py"
The environment variable in your Dockerfile is named CODEPATH, but you are setting TOKEN.
Could you please try setting CODEPATH instead, in the following way:
docker run -e CODEPATH="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
The way you've split ENTRYPOINT and CMD doesn't make sense, and it makes it impossible to do variable expansion here. You should combine the two parts together into a single CMD, and then use the shell form to run it:
# no ENTRYPOINT
CMD python3 /usr/src/app/${CODEPATH}
(Having done this, better still is to use the approach in #allan's answer and directly docker run python-image python3 other-script-name.py.)
The Dockerfile syntax doesn't allow environment expansion in RUN, ENTRYPOINT, or CMD commands. Instead, these commands have two forms.
Exec form requires you to format the command as a JSON array, and doesn't do any processing on what you give it; it runs the command with an exact set of shell words and the exact strings in the command. Shell form doesn't have any special syntax, but wraps the command in sh -c, and that shell handles all of the normal things you'd expect a shell to do.
Using RUN as an example:
# These are the same:
RUN ["ls", "-la", "some directory"]
RUN ls -la 'some directory'
# These are the same (and print a dollar sign):
RUN ["echo", "$FOO"]
RUN echo \$FOO
# These are the same (and a shell does variable expansion):
RUN echo $FOO
RUN ["/bin/sh", "-c", "echo $FOO"]
If you have both ENTRYPOINT and CMD this expansion happens separately for each half. This is where the split you have causes trouble: none of these options will work:
# Docker doesn't expand variables at all in exec form
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
# ["python3", "/usr/src/app/${CODEPATH}"] with no expansion
# The "sh -c" wrapper gets interpreted as an argument to Python
ENTRYPOINT ["python3"]
CMD /usr/src/app/${CODEPATH}
# ["python3", "/bin/sh", "-c", "/usr/src/app/${CODEPATH}"]
# "sh -c" only takes one argument and ignores the rest
ENTRYPOINT python3
CMD ["/usr/src/app/${CODEPATH}"]
# ["/bin/sh", "-c", "python3", ...]
The only real effect of this ENTRYPOINT/CMD split is to make a container that can only run Python scripts, without special configuration (an awkward docker run --entrypoint option); you're still providing most of the command line in CMD, but not all of it. I tend to recommend that the whole command go in CMD, and you reserve ENTRYPOINT for a couple of more specialized uses; there is also a pattern of putting the complete command in ENTRYPOINT and trying to use the CMD part to pass it options. Either way, things will work better if you put the whole command in one directive or the other.
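The shell-form vs exec-form difference is easy to demonstrate from Python on any system with a POSIX shell: run the same string once through sh -c (what shell form does) and once as a plain argv list (what exec form does). The path and variable name are the ones from the question.

```python
import os
import subprocess

env = dict(os.environ, CODEPATH="subfolder/testmypython.py")

# Shell form: Docker wraps the command in `sh -c`, so the shell expands
# ${CODEPATH} before the program runs.
shell_form = subprocess.run(
    ["/bin/sh", "-c", "echo /usr/src/app/${CODEPATH}"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(shell_form)  # /usr/src/app/subfolder/testmypython.py

# Exec form: no shell is involved, so the literal text passes through.
exec_form = subprocess.run(
    ["echo", "/usr/src/app/${CODEPATH}"],
    env=env, capture_output=True, text=True,
).stdout.strip()
print(exec_form)  # /usr/src/app/${CODEPATH}
```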

Pipe to Docker stdin with arguments picked up by Python argparse

I would like to pipe a file using the cat command into a Docker container along with arguments related to running the Python file. The command to run this Python file is going to be mentioned in the Dockerfile as
RUN ["python", "myfile.py"].
After building the Docker image using docker build -t test ., I shall call the command cat file.txt | docker run -i test --param1 value.
I understand how arguments are accepted in the Python file using the argparse module and here is what I have:
parser = argparse.ArgumentParser()
parser.add_argument("param1")
args = parser.parse_args()
value = args.param1
My question is: how do I configure the Dockerfile to route the parameter passed from the command line (param1) into the Python file's argument parser?
So far, my research has only shown me how to write the cat .. | docker run ... command itself.
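For what it's worth, assuming the Dockerfile uses ENTRYPOINT ["python", "myfile.py"] (RUN executes at build time, so it cannot be the run command), everything after the image name in docker run -i test --param1 value is appended to that entrypoint and reaches argparse unchanged, while the piped file arrives on stdin. Note the argument must be declared as the option --param1, not the positional param1. A sketch:

```python
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--param1")

# What the process inside the container would see for
# `cat file.txt | docker run -i test --param1 value`:
args = parser.parse_args(["--param1", "value"])
print(args.param1)  # value

# ...and the piped file would then be read from standard input:
# data = sys.stdin.read()
```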

Python argparse: Mutually exclusive required group with a required option

I am trying to have a required mutually exclusive group together with one required parameter. Below is the code I have:
#!/usr/bin/python

import argparse
import sys

# Check for the option provided as part of arguments
def parseArgv():
    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()
    group.add_argument("-v", "--verbose", choices=[1, 2, 3, 4],
                       help="Increase verbosity")
    group.add_argument("-q", "--quiet", action="store_true", help="Run quietly")
    name = parser.add_mutually_exclusive_group(required=True)
    name.add_argument("-n", "--name", help="Name of the virtual machine")
    name.add_argument("-t", "--template", help="Name of the template to use \
        for creating vm. If path is not provided then it will be looked \
        under template directory.")
    parser.add_argument("-s", "--save", help="Save the machine template. If \
        path is not provided then it will be saved under template directory.")
    #parser.add_argument("-k", "--kick_start", required=True, help="Name of the \
    #    kick start file. If path is not provided then it will be look into http \
    #    directory.")
    if len(sys.argv) == 1:
        parser.print_help()
    args = parser.parse_args()


if __name__ == '__main__':
    parseArgv()
Now the output of this program is as follows:
$ python test.py
usage: test.py [-h] [-v {1,2,3,4} | -q] (-n NAME | -t TEMPLATE) [-s SAVE]
optional arguments:
-h, --help show this help message and exit
-v {1,2,3,4}, --verbose {1,2,3,4}
Increase verbosity
-q, --quiet Run quietly
-n NAME, --name NAME Name of the virtual machine
-t TEMPLATE, --template TEMPLATE
Name of the template to use for creating vm. If path
is not provided then it will be looked under template
directory.
-s SAVE, --save SAVE Save the machine template. If path is not provided
then it will be saved under template directory.
usage: test.py [-h] [-v {1,2,3,4} | -q] (-n NAME | -t TEMPLATE) [-s SAVE]
test.py: error: one of the arguments -n/--name -t/--template is required
But if I uncomment the commented-out -k/--kick_start argument, the output changes as below:
$ python test.py
usage: test.py [-h] [-v {1,2,3,4} | -q] (-n NAME | -t TEMPLATE) [-s SAVE] -k
KICK_START
optional arguments:
-h, --help show this help message and exit
-v {1,2,3,4}, --verbose {1,2,3,4}
Increase verbosity
-q, --quiet Run quietly
-n NAME, --name NAME Name of the virtual machine
-t TEMPLATE, --template TEMPLATE
Name of the template to use for creating vm. If path
is not provided then it will be looked under template
directory.
-s SAVE, --save SAVE Save the machine template. If path is not provided
then it will be saved under template directory.
-k KICK_START, --kick_start KICK_START
Name of the kick start file. If path is not provided
then it will be look into http directory.
usage: test.py [-h] [-v {1,2,3,4} | -q] (-n NAME | -t TEMPLATE) [-s SAVE] -k
KICK_START
test.py: error: argument -k/--kick_start is required
But I want either -n or -t to be mandatory along with -k. How can I achieve this?
You have already achieved it! Argparse only prints the first error it finds, so while it may look like it's only checking -k, it actually requires -n/-t too. You can see this by actually giving it the -k argument.
If you provide the -k argument, the error message will change from test.py: error: argument -k/--kick_start is required to test.py: error: one of the arguments -n/--name -t/--template is required.
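This can be verified directly; a condensed sketch of the parser above (the value ks.cfg is just a placeholder):

```python
import argparse

parser = argparse.ArgumentParser()
name = parser.add_mutually_exclusive_group(required=True)
name.add_argument("-n", "--name")
name.add_argument("-t", "--template")
parser.add_argument("-k", "--kick_start", required=True)

# Only -k given: the group's requirement still fires (argparse just
# reports one error at a time), so parsing exits with an error.
try:
    parser.parse_args(["-k", "ks.cfg"])
except SystemExit:
    print("group error raised")

# Both requirements satisfied: parsing succeeds.
args = parser.parse_args(["-n", "vm1", "-k", "ks.cfg"])
print(args.name, args.kick_start)  # vm1 ks.cfg
```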