I have a docker container that looks like this
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
CMD [ "python", "./lodc.py", "file1.json", "file2.json" ]
It needs to take an env file and then two different files as the arguments that the script lodc.py needs to run. I have tried mounting them as described here: Passing file as argument to Docker container, but I cannot get it to work. It is important to keep the two files outside the image because they will change frequently, so it doesn't make sense to bake them into the container. Here is what I've been running:
docker run --env-file /Users/Documents/github/datasets/tmp/.env -v /Users/Documents/datasets/files:/Users/Documents/github/datasets datasets datasets/file1.json datasets/file2.json
Basically I would like to just build and run the Docker container and be able to manipulate the two argument files in another directory whenever I want without issue.
The env file is being passed correctly; the run is failing because the container cannot find the JSON files. I am new to Docker and any help would be greatly appreciated.
Thanks.
I think you just got the run command wrong. Try this one:
docker run \
--env-file /Users/Documents/github/datasets/tmp/.env \
-v /Users/Documents/datasets/files/file1.json:/file1.json \
-v /Users/Documents/github/datasets/files/file2.json:/file2.json \
<your-built-docker-image-name>
I'm not sure about your paths, but you need to run it with two different volumes (-v argument).
If the approach you are trying does not work, you can also try the option below (keep in mind that you will have to rebuild the Docker image every time the input files change):
I created the Dockerfile below based on the directory structure from your comments:
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
COPY dir1/file1.json .
COPY dir2/file2.json .
CMD [ "python", "./lodc.py", "file1.json", "file2.json"]
Please let me know if you need more help.
To give a complete answer you can do it three ways:
In the Dockerfile you can use the COPY command - this is nice, but it kind of hides which external files are being used if you move to an orchestration tool like Kubernetes or Docker Compose down the line:
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
COPY dir1/file1.json .
COPY dir2/file2.json .
CMD [ "python", "./lodc.py", "file1.json", "file2.json"]
You can reference a volume on the command line using the -v argument; this is less preferable because it doesn't codify the volumes and it depends on where the command is executed:
docker run \
--env-file /Users/Documents/github/datasets/tmp/.env \
-v /Users/Documents/datasets/files/file1.json:/file1.json \
-v /Users/Documents/github/datasets/files/file2.json:/file2.json \
<your-built-docker-image-name>
You can reference a volume in a docker-compose file; this option is preferred because the volumes are referenced in code, it is explicit that they are included, and it isn't dependent on where the command is executed:
version: '3.9'
services:
  base:
    build:
      context: .
      dockerfile: <relative-path-to-dockerfile-from-docker-compose>
    image: <desired-image-name>
    env_file:
      - /Users/Documents/github/datasets/tmp/.env
    volumes:
      - /Users/Documents/datasets/files/file1.json:/file1.json
      - /Users/Documents/github/datasets/files/file2.json:/file2.json
The last option will reduce headaches in the long term as you scale because it is the most explicit and stable. Good luck!
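To build and run it, assuming the file above is saved as docker-compose.yml in the project directory, a single command should be enough:

docker-compose up --build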
You just have to make clear where you are running your python (i.e, where are "you" inside the container).
In other words, you have to define (somehow) your WORKDIR. By default, the workdir is at / (the container's root directory).
Let's suppose, for simplicity, that you define your workdir to be inside your "datasets/files" directory. (Also, I'll use a shorter path inside the container... because we can ;) )
So, I'd suggest the following for your Dockerfile and docker-run:
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
CMD [ "python", "/lodc.py", "file1.json", "file2.json" ]
build:
$ docker build -t datasets .
run:
$ docker run --env-file /Users/Documents/github/datasets/tmp/.env \
             -v /Users/Documents/datasets/files:/mnt/datasets \
             -w /mnt/datasets \
             datasets
Or, you could free your Dockerfile a little bit, and push it to your run command:
FROM python:3.8
ADD lodc.py .
RUN pip install requests python-dotenv
ENTRYPOINT [ "python", "/lodc.py" ]
build:
$ docker build -t datasets .
run:
$ docker run --env-file /Users/Documents/github/datasets/tmp/.env \
-v /Users/Documents/datasets/files:/mnt/datasets \
datasets /mnt/datasets/file1.json /mnt/datasets/file2.json
I didn't test it, but it looks (syntactically) ok.
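A nice side effect of the ENTRYPOINT variant is that the file names are no longer baked into the image, so a later run can point at different files without rebuilding (the file names below are made up for illustration):

$ docker run --env-file /Users/Documents/github/datasets/tmp/.env \
             -v /Users/Documents/datasets/files:/mnt/datasets \
             datasets /mnt/datasets/other1.json /mnt/datasets/other2.json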
Related
I can't run 2 containers, whereas I can run each one of them separately.
I have this 1st container/image built from this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test1.py /app/container1/test1.py
WORKDIR /app/
CMD python3 container1/test1.py
I have this 2nd container/image built from this Dockerfile:
FROM debian:latest
RUN apt-get update && apt-get install python3-pip -y && pip3 install requests
ADD test2.py /app/container2/test2.py
WORKDIR /app/
CMD python3 container2/test2.py
No issues creating the images:
docker image build ./authentif -t test1:latest
docker image build ./authoriz -t test2:latest
When I run the 1st container with this command:
docker container run -it --network my_network --name test1_container \
    --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
    --rm test1:latest
it works.
And if I want to check my volume:
sudo ls /var/lib/docker/volumes/my_volume/_data
I can see data in my volume
However, when I want to run the 2nd container:
docker container run -it --network my_network --name test2_container \
    --mount type=volume,src=my_volume,dst=/app -e LOG=1 \
    --rm test2:latest
I have this error:
python3: can't open file '/app/container2/test2.py': [Errno 2] No such file or directory
If I delete everything and start over: if I start by running the 2nd container it works, but then if I want to run the 1st container, I have the error again.
Why is that?
In my container1, let's assume that my Python script writes data to a file, for example:
import os
print("test111111111")
if os.environ.get('LOG') == "1":
    print("1111111")
    with open('record.log', 'a') as file:
        file.write("file11111")
I can't reproduce your issue. When I start 2 containers using
docker run -d --rm -v myvolume:/app --name container1 debian tail -f /dev/null
docker run -d --rm -v myvolume:/app --name container2 debian tail -f /dev/null
and then do
docker exec container1 /bin/sh -c 'echo hello > /app/hello.txt'
docker exec container2 cat /app/hello.txt
it prints out 'hello' as expected.
You are mounting the volume over /app, the directory that contains your application code. That hides the code and replaces it with something else.
The absolute best approach here, if you can handle it, is to avoid sharing files at all. Keep the data somewhere like a relational database (which may be stateful). Don't mount anything on to your containers. Especially if you're looking forward to a clustered environment like Kubernetes, sharing files can be surprisingly tricky.
If you can't get rid of the shared directory, then put it somewhere other than /app. You might need to configure the alternate directory using an environment variable.
docker container run ... \
--mount type=volume,src=my_volume,dst=/data \ # /data, not /app
...
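A minimal sketch of what reading that alternate directory from an environment variable could look like in the application code (the variable name DATA_DIR is an assumption, not something from your scripts):

import os

# DATA_DIR is a hypothetical variable name; it defaults to the /data mount above
data_dir = os.environ.get('DATA_DIR', '/data')

with open(os.path.join(data_dir, 'record.log'), 'a') as file:
    file.write("file11111")

You would then pass the matching value at run time, e.g. -e DATA_DIR=/data.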
What's actually happening in your setup is that Docker has a feature to copy the contents of the image into an empty named volume on first use. This only happens if the volume is completely empty, this only happens with a named Docker volume and not bind mounts, and this doesn't happen on other container systems like Kubernetes. (I'd discourage actually relying on this behavior.)
So when you run the first container, it sees that my_volume is empty and copies the test1 image into it; then the container sees the code it expects in /app and it apparently works fine. The second container sees my_volume is non-empty, and so the volume contents (with the first image's code) hide what was in the image (the second image's code). I'd expect, if you started from scratch, whichever of the two containers you started first would work, but not the other, and if you change the code in the working image, a new container won't see that change (it will use the code out of the volume).
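If you want to see that behaviour for yourself, a quick check (assuming nothing else needs the data in my_volume) is to remove the volume and start the containers in the opposite order:

docker volume rm my_volume
docker container run --rm --mount type=volume,src=my_volume,dst=/app test2:latest
docker container run --rm --mount type=volume,src=my_volume,dst=/app test1:latest

This time the second image's code is what gets copied into the empty volume, so test2 works and test1 fails with the same 'No such file or directory' error.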
I am new to Docker and trying to Dockerize my FastAPI application.
First I created a Dockerfile:
FROM python:3.9.9
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
Then ran the following command:
docker build -t fastapi .
The command ran successfully.
After that I created the following docker-compose.yml:
version: "3"
services:
api:
build: .
ports:
- 8000:8000
env_file:
./.env
Then ran the following command:
docker-compose up -d
Ran successfully:
Network fastapi_default Created 0.7s
- Container fastapi_api_1 Started
Then, to check if it's running properly, I ran the following command:
docker ps -a
And it showed that the container had exited a few seconds after it was created.
Then I ran this command:
docker logs fastapi_api_1
And it says:
/bin/sh: 1: [uvicorn,: not found
Not sure what the reason is. I tried some solutions that I found online but nothing worked out. I do have uvicorn in my requirements.txt file.
Help will be appreciated. Please let me know if additional information is required.
Note: You don't need to run docker build -t fastapi . manually. docker-compose will do it for you (because you set build: .). But you must run the up command with the --build parameter (docker-compose up --build) to force a rebuild of the image even if it already exists.
And about your problem:
Here is a very good article (and one more) about RUN, ENTRYPOINT and CMD
Here are the three forms of CMD:
CMD ["executable","param1","param2"] (exec form, preferred)
CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)
CMD command param1 param2 (shell form)
According to the error, it looks like Docker is interpreting your CMD as shell form, or as additional parameters for a default ENTRYPOINT.
I'm actually still not sure why it happens, but changing the CMD to
CMD uvicorn app.main:app --host 0.0.0.0 --port 8000
or
ENTRYPOINT ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
should solve your problem
Also, it would be better to use the full path to the uvicorn executable (/usr/bin/uvicorn, or wherever it is installed by default?). It is just my opinion, but that may be a reason why the CMD is interpreted as parameters instead of a command.
PS: In addition, here is a note from the Docker docs:
Note
The exec form is parsed as a JSON array, which means that you must use double-quotes (“) around words not single-quotes (‘).
So exec form syntax must meet the conditions of JSON syntax.
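For example (two CMD lines are shown here only to contrast the forms; in a real Dockerfile only the last CMD takes effect):

# valid JSON array: parsed as exec form and run without a shell
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]

# single quotes are not valid JSON, so Docker falls back to shell form
# and hands the literal string to /bin/sh -c
CMD ['uvicorn', 'app.main:app', '--host', '0.0.0.0', '--port', '8000']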
So, basically there was something wrong with Docker. I had created multiple images. I removed all of them and ran the same commands again and it worked. I don't know the exact reason, but it's working now.
What I think was happening is that instead of deleting the old images and creating a new one, I was just doing
docker-compose down
and then
docker-compose up -t
I think that command was not taking the changes into consideration.
Then I ran:
docker-compose up --build
and I think that created a new image and it worked.
Then I noticed that there were at least 10 images created. I deleted all of them and ran the same commands:
docker build .
docker-compose up -t
and it worked fine again.
So basically, instead of creating a new image it was using the old one, which was not built correctly:
docker-compose up --build
In short, you should use docker-compose up --build whenever you make changes in your Dockerfile or docker-compose.yml, instead of docker-compose up -t.
It might be confusing, but I am also very new to Docker.
Thanks for the help everyone!
I've had the same issue with a Dockerfile in my docker-compose environment containing
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
RUN pip install uvicorn==0.20.0
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]
So I don't need an extra command: line in my docker-compose.yml.
It turned out that if you install uvicorn via your requirements.txt, as I like to do for testing purposes, it is already present when the later RUN pip install uvicorn==0.20.0 runs, so that step is effectively skipped. That means there is no uvicorn 'executable' available on the PATH (e.g. /usr/bin/uvicorn), just the package somewhere in site-packages, and the CMD will fail.
So, if you use uvicorn in your requirements.txt and in the Dockerfile as well, you can either force the reinstallation with
RUN pip install --ignore-installed uvicorn==0.20.0
in the Dockerfile, or set the PATH so it finds uvicorn somewhere in the guts of Python, or - what I find is a better solution to keep the image size small - remove uvicorn from requirements.txt entirely.
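Putting that together, a sketch of the Dockerfile fragment with the forced reinstall (same paths and version as above):

COPY ./requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
# reinstall uvicorn even if requirements.txt already pulled it in,
# so that the console script ends up on the PATH
RUN pip install --ignore-installed uvicorn==0.20.0
CMD ["uvicorn", "--host", "0.0.0.0", "--port", "6000", "app:app"]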
FROM python:3
WORKDIR /Users/vaibmish/Documents/new/graph-report
RUN pip install graphreport==1.2.1
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
CMD [ graphreport ]
This is part of my Dockerfile.
I wish to remove the cd and hard-coded volume paths from the file and have something like -v there, so that whoever runs it can give his or her own volume path instead.
The line
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
is wrong. You achieve the same with WORKDIR:
WORKDIR /Users/vaibmish/Documents/new/graph-report/graphreport_metrics
WORKDIR creates the path if it doesn't exist and then changes the current directory to that path (same as mkdir -p /path/new && cd /path/new)
You can also declare the path as a volume and instruct whoever runs the container to provide their own path (docker run -v host_path:container_path ...):
VOLUME /Users/vaibmish/Documents/new/graph-report
A final note: It looks like these paths are from the host. Remember that the paths in the Dockerfile are not host paths. They are paths inside the container.
Typical practice here is to pick some fixed path inside the Docker container. It should be a different path from where your application is installed; it does not need to match any particular host path at all.
FROM python:3
RUN pip3 install graphreport==1.2.1
WORKDIR /data
CMD ["graphreport"]
docker build -t me/graphreport:1.2.1 .
docker run --rm \
-v /Users/vaibmish/Documents/new/graph-report:/data \
me/graphreport:1.2.1
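Anyone running the image can then substitute their own host directory for the fixed /data path inside the container (the host path here is just an example):

docker run --rm \
    -v /some/other/host/path:/data \
    me/graphreport:1.2.1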
(Remember that only the last CMD has an effect, and if it's not a well-formed JSON array, Docker will interpret it as a shell command. What you show in the question would run the test(1) command and not the program you're installing.)
If you're trying to install a single package from PyPI and just run it on local files, a Python virtual environment will be much easier to set up than anything based on Docker, and will essentially work as you expect:
python3 -m venv graphreport
. graphreport/bin/activate
pip3 install graphreport==1.2.1
cd /Users/vaibmish/Documents/new/graph-report
graphreport
deactivate # switch back to system Python/pip
All of the installed Python code is inside the graphreport virtual environment directory, and if you don't need this application again, you can just delete the directory tree.
I'm new to Docker. I have a Python program that I run in the following way.
python main.py --s=aws --d=scylla --p=4 --b=15 --e=local -w
Please note the double hyphen -- for the first five parameters and the single hyphen '-' for the last one.
I'm trying to run this inside a Docker container. Here's my Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python","app.py","--s","(source)", "--d","(database)","--w", "(workers)", "--b", "(bucket)", "--e", "(env)", "-w"]
I'm not sure if this will work, as I don't know exactly how to test and run it. I want to run the Docker image with the following port mappings.
docker run --name=something -p 9042:9042 -p 7000:7000 -p 7001:7001 -p 7199:7199 -p 9160:9160 -p 9180:9180 -p 10000:10000 -d user/something
How can I correct the Dockerfile? Once I build an image, how do I run it?
First, fix the Dockerfile:
FROM python:3.6
COPY . /app
WORKDIR /app
# optional: it is better to chain commands to reduce the number of created layers
RUN pip install --upgrade pip \
&& pip install --no-cache-dir -r requirements.txt
# mandatory: "--s=smth" is one argument
# optional: it's better to use environment variables for source, database etc
CMD ["python","app.py","--s=(source)", "--d=(database)","--w=(workers)", "--b=(bucket)", "--e=(env)", "-w"]
then, build it:
docker build -f "<dockerfile path>" -t "<tag to assign>" "<build dir (eg .)>"
Then, you can just use the assigned tag as an image name:
docker run ... <tag assigned>
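For example, with the image name and port mappings from your question (assuming the Dockerfile is in the current directory):

docker build -t user/something .
docker run --name=something -p 9042:9042 -p 7000:7000 -p 7001:7001 -p 7199:7199 \
    -p 9160:9160 -p 9180:9180 -p 10000:10000 -d user/something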
UPD: I got it wrong the first time; the tag should be used in place of the image name, not the instance name.
UPD2: In my first response I assumed you were going to hardcode the parameters, and only mentioned that it is better to use environment variables. Here is an example of how to pass them in.
The quickest dirty way to do so is to replace CMD with something like:
CMD ["sh", "-c", "python app.py --s=$SOURCE --d=$DATABASE --w=$WORKERS ... -w"]
(it is common to use CAPS names for environment variables)
It will be better, however, to read environment variables directly in your Python script instead of command line arguments, or use them as defaults:
# somewhere in app.py
import os
...
DATABASE = os.environ.get('DATABASE', default_value)  # can default to args.d
SOURCE = os.environ.get('SOURCE') # None by default
# etc
Don't forget to update the Dockerfile as well in this case:
# Dockerfile:
...
CMD ["python","app.py"]
Finally, pass environment variables to your run command:
docker run --name=something ... -e DATABASE=<dbname> -e SOURCE=<source> ... <tag assigned at build>
There are more ways to pass environment variables, I'll just refer to the official documentation here:
https://docs.docker.com/compose/environment-variables/
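For example, the same variables could be set in a compose file instead of on the command line (the service name and image tag below are placeholders):

version: "3"
services:
  app:
    image: <tag assigned at build>
    environment:
      - DATABASE=<dbname>
      - SOURCE=<source>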
I have been reading this tutorial:
https://prakhar.me/docker-curriculum/
along with other tutorials and the Docker docs, and I am still not completely clear on how to do this task.
The problem
My local machine is running Mac OS X, and I would like to set up a development environment for a Python project. In this project I need to call an API from the Docker image bamos/openface. The project also has some dependencies such as yaml, etc. If I just mount my local project into openface, i.e.:
docker run -v path/to/project:/root/project -p 9000:9000 -p 8000:8000 -t -i bamos/openface /bin/bash
Then I need to install yaml and other dependencies, and every time I exit the container the installations are lost. Additionally, it is also much slower for some reason. So the right way to do this is with Docker Compose, but I am not sure how to proceed from here.
UPDATE
In response to the comments, I will now update the problem:
Right now my Dockerfile looks like this:
FROM continuumio/anaconda
ADD . /face-off
WORKDIR /face-off
RUN pip install -r requirements.txt
EXPOSE 5000
CMD [ "python", "app.py" ]
It is important that I build from anaconda since a lot of my code will use numpy and scipy. Now I also need bamos/openface, so I tried adding that to my docker-compose.yml file:
version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    build: bamos/openface
However, I am getting this error:
build path path/to/face-off/bamos/openface either does not exist, is not accessible, or is not a valid URL
So I need to pass bamos/openface the right way so I can build a container with it. Right now bamos/openface is listed when I do docker images.
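For reference, build: in a compose file expects a local build context (a directory containing a Dockerfile) or a Git URL, which is why Compose complains about the path. Since bamos/openface already exists as an image in docker images, the usual way to reference it would be image: instead of build:, roughly like this (an untested sketch, not from the original thread):

version: '2'
services:
  web:
    build: .
    command: python app.py
    ports:
      - "5000:5000"
    volumes:
      - .:/face-off
  openface:
    image: bamos/openface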