in my Dockerfile:
ENTRYPOINT ["python3", "start1.py"]
When I run the Docker image, I want to override it to run start2.py with a parameter year=2020. So I run:
docker run -it --entrypoint python3 start2.py year 2020 b43ssssss
It still runs start1.py, what am I doing wrong?
Because anything passed as CMD alongside the entrypoint ENTRYPOINT ["python3", "start1.py"] will be passed as arguments to the Python file start1.py.
You can verify this by doing the following
import sys
print("All ARGs", sys.argv[1:])
So the output will be
All ARGs ['start2.py', 'year', '2020', 'b43ssssss']
So convert the ENTRYPOINT to python3 only, with a default CMD (start1.py), so you keep control over which file runs:
ENTRYPOINT ["python3"]
# Default file to run
CMD ["start1.py"]
and then override at run time
docker run -it --rm my_image start2.py year 2020 b43ssssss
Now the args should be
All ARGs ['year', '2020', 'b43ssssss']
For a couple of reasons, I tend to recommend using CMD over ENTRYPOINT as a default. This question is one of them: if you need to override the command at run time, it's much easier to do if you specify CMD.
# Change ENTRYPOINT to CMD
CMD ["python3", "start1.py"]
# Run an alternate script
docker run -it myimage \
python3 start2.py year 2020 b43ssssss
# Run a debugging shell
docker run --rm -it myimage \
bash
# Quickly double-check file contents
docker run --rm -it myimage \
ls -l /app
# This is what you're trying to avoid
docker run --rm -it \
--entrypoint /bin/ls \
myimage \
-l app
There is also a useful pattern of using ENTRYPOINT to run a secondary script that does some initial setup (waits for a database, rewrites config files, bootstraps a data store, ...) and then does exec "$@" to launch the CMD. I tend to reserve ENTRYPOINT for this pattern and default to CMD even if I don't specifically need it.
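A minimal sketch of that wrapper pattern (the setup step and the script name docker-entrypoint.sh are placeholder assumptions):

#!/bin/sh
# docker-entrypoint.sh: one-time setup, then hand off to the CMD
set -e

# ... placeholder: wait for a database, rewrite config files, etc. ...

# Replace this shell with the CMD so it runs as the main container process
exec "$@"

and in the Dockerfile:

ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python3", "start1.py"]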
I do not recommend splitting the command with ENTRYPOINT ["python3"]. In the very specific case of wanting to run an alternate Python script it saves one word in the docker run command, but you still need to repeat the script name (unlike the "entrypoint-as-command" pattern) and you still need the --entrypoint option if you want to run something non-Python.
Related
I have a very simple Dockerfile
FROM python:3
WORKDIR /usr/src/app
ENV CODEPATH=default_value
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
Here is my container command
docker run -e TOKEN="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
When I check the container logs, it shows:
python3: can't open file '/usr/src/app/${CODEPATH}': [Errno 2] No such file or directory
It looks like what you want to do is override the default path to the python file which is run when you launch the container. Rather than passing this option in as an environment variable, you can just pass the path to the file as an argument to docker run, which is the purpose of CMD in your dockerfile. What you set as the CMD option is the default, which users of your image can easily override by passing an argument to the docker run command.
docker run --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest "subfolder/testmypython.py"
The environment variable is named CODEPATH, but you are setting TOKEN as the environment variable.
Could you please try setting CODEPATH as the env var, in the following way:
docker run -e CODEPATH="subfolder/testmypython.py" --name mycontainer -v /opt/testuser/pythoncode/:/usr/src/app/ -t -d python-image:latest
The way you've split ENTRYPOINT and CMD doesn't make sense, and it makes it impossible to do variable expansion here. You should combine the two parts together into a single CMD, and then use the shell form to run it:
# no ENTRYPOINT
CMD python3 /usr/src/app/${CODEPATH}
(Having done this, better still is to use the approach in @allan's answer and directly docker run python-image python3 other-script-name.py.)
Docker itself doesn't do environment expansion in RUN, ENTRYPOINT, or CMD commands. Instead, these commands have two forms.
Exec form requires you to format the command as a JSON array, and doesn't do any processing on what you give it; it runs exactly the shell words and exact strings in the command. Shell form has no special syntax, but wraps the command in sh -c, and that shell handles all of the normal things you'd expect a shell to do.
Using RUN as an example:
# These are the same:
RUN ["ls", "-la", "some directory"]
RUN ls -la 'some directory'
# These are the same (and print a dollar sign):
RUN ["echo", "$FOO"]
RUN echo \$FOO
# These are the same (and a shell does variable expansion):
RUN echo $FOO
RUN ["/bin/sh", "-c", "echo $FOO"]
If you have both ENTRYPOINT and CMD this expansion happens separately for each half. This is where the split you have causes trouble: none of these options will work:
# Docker doesn't expand variables at all in exec form
ENTRYPOINT ["python3"]
CMD ["/usr/src/app/${CODEPATH}"]
# ["python3", "/usr/src/app/${CODEPATH}"] with no expansion
# The "sh -c" wrapper gets interpreted as an argument to Python
ENTRYPOINT ["python3"]
CMD /usr/src/app/${CODEPATH}
# ["python3", "/bin/sh", "-c", "/usr/src/app/${CODEPATH}"]
# "sh -c" only takes one argument and ignores the rest
ENTRYPOINT python3
CMD ["/usr/src/app/${CODEPATH}"]
# ["/bin/sh", "-c", "python3", ...]
The only real effect of this ENTRYPOINT/CMD split is to make a container that can only run Python scripts; running anything else needs special configuration (the awkward docker run --entrypoint option). You're still providing most of the command line in CMD, but not all of it. I tend to recommend that the whole command go in CMD, and that you reserve ENTRYPOINT for a couple of more specialized uses; there is also a pattern of putting the complete command in ENTRYPOINT and using the CMD part to pass it options. Either way, things will work better if you put the whole command in one directive or the other.
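For example, with the single shell-form CMD above, both of these work at run time (reusing the image name and volume mount from the question):

# Set the variable and let the shell-form CMD expand it
docker run -e CODEPATH=subfolder/testmypython.py -v /opt/testuser/pythoncode/:/usr/src/app/ python-image:latest
# Or bypass the variable entirely by overriding CMD
docker run -v /opt/testuser/pythoncode/:/usr/src/app/ python-image:latest python3 /usr/src/app/other-script-name.py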
I'm trying to make my first Django container with uwsgi. It works as follows:
FROM python:3.5
RUN apt-get update && \
apt-get install -y && \
pip3 install uwsgi
COPY ./projects.thux.it/requirements.txt /opt/app/requirements.txt
RUN pip3 install -r /opt/app/requirements.txt
COPY ./projects.thux.it /opt/app
COPY ./uwsgi.ini /opt/app
COPY ./entrypoint /usr/local/bin/entrypoint
ENV PYTHONPATH=/opt/app:/opt/app/apps
WORKDIR /opt/app
ENTRYPOINT ["entrypoint"]
EXPOSE 8000
#CMD ["--ini", "/opt/app/uwsgi.ini"]
entrypoint here is a script that detects whether to call uwsgi (when there are no args) or python manage.py (in all other cases).
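(The actual script isn't shown; a minimal sketch of what such an entrypoint might look like, as an assumption:)

#!/bin/sh
# entrypoint: no args -> serve with uwsgi; otherwise forward the args to manage.py
if [ "$#" -eq 0 ]; then
    exec uwsgi --ini /opt/app/uwsgi.ini
else
    exec python manage.py "$@"
fi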
I'd like to use this container both as an executable (dj migrate, dj shell, ...; dj here is an alias for python manage.py, the handler for Django interaction) and as a long-term container (uwsgi --ini uwsgi.ini). I use docker-compose as follows:
web:
  image: thux-projects:3.5
  build: .
  ports:
    - "8001:8000"
  volumes:
    - ./projects.thux.it/web/settings:/opt/app/web/settings
    - ./manage.py:/opt/app/manage.py
    - ./uwsgi.ini:/opt/app/uwsgi.ini
    - ./logs:/var/log/django
I do in fact manage to serve the project correctly, but to interact with Django (e.g. to run check) I need to issue:
docker-compose exec web entrypoint check
while reading the docs I would have imagined I just needed the arguments (without entrypoint):
Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point.
The working situation with "repeated" entrypoint:
$ docker-compose exec web entrypoint check
System check identified no issues (0 silenced).
The failing one if I avoid 'entrypoint':
$ docker-compose exec web check
OCI runtime exec failed: exec failed: container_linux.go:346: starting container process caused "exec: \"check\": executable file not found in $PATH": unknown
docker exec never uses a container's entrypoint; it just directly runs the command you give it.
When you docker run a container, the entrypoint and command you give to start it are combined to produce a single command line, and that command becomes the main container process. On the other hand, when you docker exec a command in a running container, it's interpreted literally; there aren't two parts of the command line to assemble, and the container's entrypoint isn't considered at all.
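Concretely, with the image from the question:

# docker run: the entrypoint and the arguments are combined
docker run thux-projects:3.5 check
# -> actually runs: entrypoint check

# docker exec: the command is taken literally, the entrypoint is ignored
docker-compose exec web check
# -> actually runs: check (fails: "check" alone is not on $PATH)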
For the use case you describe, you don't need an entrypoint script to process the command in an unusual way. You can create a symlink to the manage.py script to give a shorter alias to run it, but make the default command be the uwsgi runner.
# Make manage.py executable and give it a short alias on $PATH
RUN chmod +x manage.py
RUN ln -s /opt/app/manage.py /usr/local/bin/dj
CMD ["uwsgi", "--ini", "/opt/app/uwsgi.ini"]
# Runs uwsgi:
docker run -p 8000:8000 myimage
# Manually trigger database migrations:
docker run --rm myimage dj migrate
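With the docker-compose setup from the question, the equivalents would be (a sketch, assuming the same web service):

# Long-running server, from the image's CMD
docker-compose up web
# One-off management command in a fresh container
docker-compose run --rm web dj migrate
# Or in the already-running container
docker-compose exec web dj check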
I have a Python script that I run with the following command:
python3 scan.py --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id 42
This works perfectly when I run it on the command line
In my Dockerfile, I have tried ARG and ENV. None seem to work.
#ARG api_token
#ARG username
#ARG password
# Configure AWS arguments
#RUN aws configure set aws_access_key_id $AWS_KEY \
# && aws configure set aws_secret_access_key $AWS_SECRET_KEY \
# && aws configure set default.region $AWS_REGION
### copy bash script and change permission
RUN mkdir workspace
COPY scan-api.sh /workspace
RUN chmod +x /workspace/scan-api.py
CMD ["/python3", "/workspace/scan-api.py"]
So how do I define these flagged arguments in the Dockerfile?
And what's the command to run when running the image?
You can do this in two ways, since you want to override at run time:
As args to the docker run command
As an ENV to the docker run command
The first is the simplest, and you will not need to change anything in the Dockerfile:
docker run --rm my_image python3 /workspace/scan-api.py --bar tet --api_token 5563ff177863e97a70a45dd4 --base_api_url http://101.102.34.66:4242/scanjob/ --base_report_url http://101.102.33.66:4242/ --job_id 42
and my simple script
import sys
print ("All ARGs",sys.argv[1:])
Using ENV, you will need to change the Dockerfile.
I am posting the way for one arg; you can do this for all args:
FROM python:3.7-alpine3.9
ENV API_TOKEN=default_token
CMD ["sh", "-c", "python /workspace/scan-api.py $API_TOKEN"]
So you can override it at run time, or run with the default value:
docker run -it --rm -e API_TOKEN=new_token my_image
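An alternative sketch (an assumption, not part of the original answer): have the script read the variable itself, which keeps CMD in exec form and avoids the sh -c wrapper:

import os
import sys

# Fall back to the Dockerfile default when the variable isn't set
api_token = os.environ.get("API_TOKEN", "default_token")
print("All ARGs", sys.argv[1:], "API_TOKEN:", api_token)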
CMD takes exactly the same arguments you used from the command line.
CMD ["/python3", "scan.py", "--api_token", "5563ff177863e97a70a45dd4", "--base_api_url", "http://101.102.34.66:4242/scanjob/", "--base_report_url", "http://101.102.33.66:4242/", "--job_id", "42"]
It's confusing.
You will need to use the shell form of ENTRYPOINT (or CMD) in order to have environment variable substitution, e.g.
ENTRYPOINT python3 /workspace/scan-api.py --api_token=${TOKEN} ...
And then run the container using something of the form:
docker run --interactive --tty --env=TOKEN=${TOKEN} ...
HTH!
I'm trying to Dockerize a web service using Tangelo and Python.
My project structure is as follows:
test.py
requirements.txt
Dockerfile
test.py
import ...
def run(query):
    ...
    return response
requirements.txt
... # other packages, numpy, open-cv, etc
tangelo
Dockerfile
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y python python-pip git
EXPOSE 9220
ADD . /test
WORKDIR /test
RUN pip install -r requirements.txt
CMD "tangelo --port 9220"
I build this using
docker build -t "test" .
And run in detached mode using
docker run -p 9220:9220 -d "test"
But docker ps shows me that the container stops almost as soon as it has started. I don't know what the problem is, since I cannot inspect the logs.
I have tried a lot of things but I still can't figure this thing out.
Any ideas? If needed, I can provide more info.
EDIT:
When I build, step 8 says
Step 8/8 : ENTRYPOINT tangelo --port 9220
---> Running in 8b54841853ab
Removing intermediate container 8b54841853ab
So it means these are run in an intermediate container. Why is that and how can I prevent it?
TL;DR: Use:
CMD tangelo -np --port 9220
Instead of:
CMD "tangelo --port 9220"
Explanation:
You have two ways to debug the problem:
Inspect the logs of the container:
$ docker run -d test
28684015e519c0c8d644fccf98240d1465acabab6d16c19fd59c5f465b7f18af
$ sudo docker logs 28684015e519c
/bin/sh: 1: tangelo --port 9220: not found
Instead of running in detached mode, run in foreground with -i/--interactive (and optionally also -t/--tty):
$ docker run -ti test
/bin/sh: 1: tangelo --port 9220: not found
As you can see from above, the problem is that the quoted string tangelo --port 9220 is being looked up as a single command name. Split it by removing the quotes:
CMD tangelo --port 9220 # this will use a shell
or use the "exec" form (preferred, given that you don't need any shell features):
CMD ["tangelo", "--port", "9220"] # this will execute tangelo directly
or even better use ENTRYPOINT + CMD:
ENTRYPOINT ["tangelo"]
CMD ["--port", "9220"] # this will execute tangelo directly
After this change, you'll still have a problem:
$ sudo docker run -ti test
...
[29/Apr/2018:02:43:39] TANGELO no such group 'nobody' to drop privileges to
Tangelo is complaining about the fact that there is no user and group named nobody inside the container. Again, there are two things you can do: add a RUN to create the nobody user and group, or run Tangelo with the -np/--no-drop-privileges option:
ENTRYPOINT ["tangelo"]
CMD ["--no-drop-privileges", "--port", "9220"]
It's fine if during the build you see intermediate containers: Docker creates them for each build step. The commands you specify in ENTRYPOINT or CMD are not executed during build, they're just recorded into the final image.
Let me clarify what I want to do.
I have a Python script on my local machine that performs a lot of stuff, and at a certain point it has to call another Python script that must be executed inside a Docker container. That script takes some input arguments and returns some results.
So I want to figure out how to do that.
Example:
def function():
    do stuff
    ...
    do more stuff
    # call another local script that must be executed inside a docker container
    result = execute_python_script_into_a_docker(python script arguments)
The container has been launched in a terminal as:
docker run -it -p 8888:8888 my_docker
You can add your file inside the container thanks to the -v option (the host path must be absolute, hence $(pwd)):
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
And execute your Python file inside the container with:
python /myFile.py
or directly from the host:
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker python /myFile.py
And even if your container is already running:
docker exec -ti docker_name python /myFile.py
docker_name is available from the docker ps command.
Or you can specify the name in the run command like:
docker run -it --name docker_name -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
The -v syntax is:
-v absoluteHostPath:absoluteContainerPath
You can mount a folder too in the same way:
-v $(pwd)/myFolder:/customPath/myFolder
More details in the Docker documentation.
You can use Docker's Python SDK library. First you need to get your script into the container; I recommend you do it when you create the container or when you start it, as Callmemath mentioned:
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
Then to run the script using the library:
...
import docker

# Connect to the Docker daemon configured by the environment
client = docker.client.from_env()
container = client.containers.get(CONTAINER_ID)
# exec_run returns the command's exit code and its output
exit_code, output = container.exec_run("python your_script.py script_args")
...
You have to use docker exec -it container_name python /myFile.py.
Note: to use docker exec, the container must already be running (started with docker run).