I need to navigate to my docker-compose.yml file, but running cd docker-compose.yml doesn't work. The terminal tells me the directory name is wrong or invalid (I couldn't find the exact English translation of the message). I want to navigate to the docker-compose.yml file so I can run docker-compose up in the terminal.
You are already in the directory for docker-compose, but you can't cd into a file; you run commands against it.
You need to do cd app.
docker-compose.yml is a file inside the directory app. Since cd moves you into a directory, it makes no sense to run it on a file.
The command you want, once you are in the same folder as your docker-compose.yml, is docker-compose up.
So to sum it up:
cd app
docker-compose up
It's not possible to cd into a file; use the docker-compose command to run it.
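If you want to double-check where the file lives before running anything, a quick sketch (the app directory name is taken from the answer above):
cd app
ls                   # docker-compose.yml should be listed here
docker-compose up    # add -d if you want it to run in the background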
I currently have a Python application running in a Docker container on Ubuntu 20.04.
In this Python application I want to create a text file every few minutes for use in other applications on the Ubuntu server. However, I am finding it challenging to create a file and save it on the server from inside a containerised Python application.
The application Dockerfile/start.sh/main.py files reside in /var/www/my_app_name/ and I would like to have the output.txt file that main.py creates in that same folder, the location of the Dockerfile/main.py source.
The text file is created in Python using a simple line:
text_file = open("my_text_file.txt", "wt")
I have seen that the best way to do this is to use a volume. My current docker run, which is called by the shell script start.sh, includes the line:
docker run -d --name=${app} -v $PWD:/app ${app}
However I am not having much luck and the file is not created in the working directory where main.py resides.
A bind mount is the way to go and your -v $PWD:/app should work.
If you run a command like
docker run --rm -v $PWD:/app alpine /bin/sh -c 'echo Hello > /app/hello.txt'
you'll get a file in your current directory called hello.txt.
The command runs an Alpine image, maps the current directory to /app in the container and then echoes 'Hello' to /app/hello.txt in the container.
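On the Python side, the same idea looks like this (a minimal sketch, assuming the code writes to /app, the container side of your bind mount):
# main.py sketch: /app inside the container is $PWD on the host,
# so this file shows up next to the Dockerfile on the host
with open("/app/my_text_file.txt", "wt") as text_file:
    text_file.write("hello from inside the container\n")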
Ended up solving this myself; thanks to those who answered and got me on the right path.
For those finding this question in the future: creating a bind mount using the -v flag in docker run is indeed the correct way to go, just ensure that you have the correct path.
To get the file my_text_file.txt in the folder where my main.py and Dockerfile files are located, I changed my above to:
-v $PWD:/src/output ${app}
The /src/output is a folder structure within the container where my_text_file.txt is created, so in the Python code you should save the file using:
text_file = open("/src/output/my_text_file.txt", "wt")
The ${PWD} is where my_text_file.txt ends up on the local machine.
In short, I was not saving the file to the correct folder within the container in my Python code; it should have been saved to /app in the container, and then my solution in the OP would have worked.
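Putting the pieces together, the full run line in start.sh would look like this (a sketch assembled from the snippets above):
docker run -d --name=${app} -v $PWD:/src/output ${app}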
I have a Python script that creates a folder and writes a file in that folder. I can open the file and see its contents, but unfortunately I cannot edit it. I tried adding the command RUN chmod -R 777 . but that didn't help either. On the created files and folders I see a lock icon.
I have been able to recreate the issue in a small demo. The contents are as follows:
demo.py
from pathlib import Path
Path("./created_folder").mkdir(parents=True, exist_ok=True)
with open("./created_folder/dummy.txt", "w") as f:
    f.write("Cannot edit the contents of this file")
Dockerfile
FROM python:buster
COPY . ./data/
WORKDIR /data
RUN chmod -R 777 .
CMD ["python", "demo.py"]
docker-compose.yaml
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
After creating these files, run docker-compose up --build, look at the results, and then try to edit and save the created file dummy.txt - which should fail.
Any idea how to make sure that the created files can be edited and saved on the host?
EDIT:
I am running docker-compose rootless. I had read that it is not a good idea to run Docker with sudo, so I followed the official instructions and added my user to the docker group, etc.
I actually run the command docker-compose up --build, not just docker compose up.
I am on Ubuntu 20.04.
My username is in both groups:
~$ grep /etc/group -e "docker"
docker:x:999:username
~$ grep /etc/group -e "sudo"
sudo:x:27:username
I tried using the PUID and PGID environment variables, but I still have the same issue.
Current docker-compose file:
version: "3.3"
services:
python:
working_dir: /data
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/data/
environment:
- PUID=1000
- PGID=1000
It seems like the Docker group is different from your user group, and you're misunderstanding the way Docker RUN works.
Here's what happens when you run docker build:
You pull the python:buster image
You copy the contents of the current directory into the Docker image
You set the working directory to the existing data directory
You set the permissions of the data directory to 777
Finally, the CMD is set to indicate what program should run
When you do a docker-compose up, the RUN command has no effect, because RUN is a Dockerfile build-time instruction, not a runtime command. When your container runs, it writes the files as the container's user/group (root, unless the image says otherwise), which your host user doesn't have permission to edit.
A Docker container runs in an isolated environment, so the UID/GID to user name/group name mapping is not shared with the processes inside containers. You are facing two problems:
The files created inside containers are owned by root:root (by default, or by a UID chosen by the image)
The files are created with permission 644 or 755, which is not editable by the host user.
You could refer to the images provided by linuxserver.io to learn how they solve these two problems, for example qBittorrent. They provide two environment variables, PUID and PGID, to solve problem 1, and a UMASK option to solve problem 2.
Below is the related implementation in their custom Ubuntu base image. You could use that as your image base as well.
The place where PUID and PGID are handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1/root/etc/cont-init.d/10-adduser
The place where UMASK is handled:
https://github.com/linuxserver/docker-baseimage-ubuntu/blob/b529d350b1438aa81e68a5d87eff39ade0f1c879/root/usr/bin/with-contenv#L2-L5
Personally, I'd prefer setting PUID and PGID to match the host user (you can get them from id -u and id -g) and NOT touching UMASK unless absolutely necessary.
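If you'd rather not switch base images, a simpler alternative (a sketch, not the linuxserver.io mechanism) is to run the container as your host UID/GID, so files created on the bind mount are editable on the host:
# run as the host user's UID:GID; files created under /data are then owned by you
docker run --rm --user "$(id -u):$(id -g)" -v "$PWD":/data your-image
In docker-compose, the equivalent is a user: "1000:1000" entry on the service.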
I am trying to run my Python script on Docker. I tried different ways to do it but was not able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac, but I want to know how I can build an image and push it to Docker Hub, and after that pull it and run my script on Docker itself.
Going by the question title: if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run using the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your dockerfile and script in there and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the docker engine does on build is read the whole current context.
Next we'll take a look at your dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit, after finding the problem that caused the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
Raspberry Pis have a different underlying architecture (ARM instead of x86_64), which could have been the problem, but wasn't. If it had been, switching the parent image to FROM armhf/python would have been enough.
BUT! The error kept occurring.
The solution to this problem is simply a missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
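Since the script is run directly by CMD ["capturing.py", ...], it also needs to be marked executable inside the image; a minimal sketch of both pieces (the chmod line is my assumption, not part of the original answer):
#!/usr/bin/env python
# capturing.py sketch: the shebang tells the kernel which interpreter to use
print("capturing started")
and in the Dockerfile:
RUN chmod +x capturing.py   # make the script directly executable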
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image will be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want to upload it to Docker Hub, you need to log in with docker login, then upload the image with docker push.
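The push sequence looks roughly like this (a sketch; yourhubuser is a placeholder for your Docker Hub username):
docker login                                                   # authenticate against Docker Hub
docker tag pulkit/scriptname:1.0 yourhubuser/scriptname:1.0    # retag under your namespace
docker push yourhubuser/scriptname:1.0                         # upload the image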
I followed @samprog's (most accepted) answer on my machine running Ubuntu 14.04.6 and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error after changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
Another way to run a Python script on Docker:
Copy the local Python script into the container:
docker cp yourlocalscript.py container_id:/dst_path/
The container id can be found using:
docker ps
Run the Python script inside the container:
docker exec -it container_id python /dst_path/yourlocalscript.py
It's very simple.
1- Go to your Python script directory and create a file with this title, without any extension:
Dockerfile
2- Now open the Dockerfile and write your script name instead of sci.py.
(content of Dockerfile)
# I chose the slim version; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script name
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it. Now you will create an image from this Dockerfile and the .py script, and then run it.
3- In the folder's address bar, type cmd and press the Enter key:
4- When the cmd window opens, type in it:
docker build -t my-python-app .
(this creates an image in Docker with the title my-python-app)
5- and finally, run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; the dependency hell between python2 and python3 got me. Here is the solution:
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias to /home/user/.zshrc or /home/user/.bashrc:
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker container.
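Usage then looks like this (a sketch; the current directory is mounted at the same path inside the container, so relative paths keep working):
$ python2
>>> print "hello"
hello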
I want to create a docker image. This is my work directory:
Dockerfile.in test.json test.py
And this is my Dockerfile:
COPY ./test.json /home/test.json
COPY ./test.py /home/test.py
RUN python test.py
When I launch this command:
docker build -f Dockerfile.in -t 637268723/test:1.0 .
It gives me this error:
Step 1/5 : COPY ./test.json /home/test.json
 ---> Using cache
 ---> 6774cd225d60
Step 2/5 : COPY ./test.py /home/test.py
COPY failed: stat /var/lib/docker/tmp/docker-builder428014112/test.py: no such file or directory
Can anyone help me?
You should put those files into the same directory as the Dockerfile.
Check if there's a .dockerignore file; if so, add:
!mydir/test.json
!mydir/test.py
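If your .dockerignore excludes everything by default, the negated entries are what let those files back into the context; a minimal sketch (the catch-all * line is an assumption for illustration):
# .dockerignore sketch: exclude everything from the context,
# then re-include only the files the Dockerfile needs
*
!mydir/test.json
!mydir/test.py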
Q1: Check the .dockerignore file in your build path; the files or directories you want to copy may be in the ignore list!
Q2: The COPY directive is based on the context in which you are building the image, so be aware of any problems with the directory in which you are currently building the image! See: https://docs.docker.com/engine/reference/builder/#copy
I had to use the following command to start the build:
docker build .
Removing ./ from source path should resolve your issue:
COPY test.json /home/test.json
COPY test.py /home/test.py
I was also facing the same issue; I moved my Dockerfile to the root of the project, and then it worked.
Make sure the context you build your image with is set correctly. You can set the context when building as an argument.
Example:
docker build -f ./Dockerfile .. where '..' is the context in this example.
In your case removing ./ should solve the issue. I had another case where I was using a directory from the parent directory; Docker can only access files below the build-context directory (where the Dockerfile is present),
so if I have a directory structure /root/dir with the Dockerfile at /root/dir/Dockerfile,
I cannot do the following:
COPY root/src /opt/src
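A way around this is to build from the parent directory so the file is inside the context (a sketch using the paths above):
cd /root
docker build -f dir/Dockerfile .   # context is /root, so COPY src /opt/src now resolves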
In my case, it was a comment on the same line that was messing up the COPY command.
I removed the comment after the COPY command and placed it on a dedicated line above the command. Surprisingly, that resolved the issue.
Faulty Dockerfile command
COPY qt-downloader . # https://github.com/engnr/qt-downloader -> contains the script to auto download qt for different architectures and versions
Working Dockerfile command
# https://github.com/engnr/qt-downloader -> contains the script to auto download qt for different architectures and versions
COPY qt-downloader .
Hope it helps someone.
This may help someone else facing a similar issue.
Instead of leaving the file floating in the same directory as the Dockerfile, create a directory, place the file to copy inside it, and then try:
COPY mydir/test.json /home/test.json
COPY mydir/test.py /home/test.py
Another potential cause is that Docker will not follow symbolic links by default (i.e. don't use ln -s).
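In other words, if a file in the context is a symlink whose target lies outside the context, COPY can't use it; replace the link with a real copy (a sketch with hypothetical paths):
# config.json is a symlink pointing outside the build context
rm config.json
cp ../shared/config.json config.json   # a real file inside the context works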
The following structure in docker-compose.yaml will allow you to have the Dockerfile in a subfolder from the root:
version: '3'
services:
  db:
    image: postgres:11
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 127.0.0.1:5432:5432
  web:
    build:
      context: ".."
      dockerfile: dockerfiles/Dockerfile
    command: ...
    ...
Then, in your Dockerfile, which is in the same directory as docker-compose.yaml, you can do the following:
ENV APP_HOME /home
RUN mkdir -p ${APP_HOME}
# Copy the file to the directory in the container
COPY test.json ${APP_HOME}/test.json
COPY test.py ${APP_HOME}/test.py
# Browse to that directory created above
WORKDIR ${APP_HOME}
You can then run docker-compose from the parent directory like:
docker-compose -f .\dockerfiles\docker-compose.yaml build --no-cache
In my case, I had to put all my project files into a subdirectory:
app/          <- inside the app directory we have the following:
  package.js
  src
  assets
Dockerfile
Then I copied the files this way:
COPY app ./
I had this error while trying to build a Docker image and push it to the container registry. Inside my Dockerfile I tried to copy a jar file from the target folder and execute it with the java -jar command.
I solved the issue by removing the .jar file and the target folder from the .gitignore file.
When using the Docker Compose files, publish publishes to obj/Docker/Publish. When I copied my files there and pointed my Dockerfile to this directory (as generated), it worked…
Docker looks for files relative to the current directory,
i.e. if your command is
COPY target/xyz.jar app.jar
ADD target/xyz.jar app.jar
then the xyz jar should be in the current directory's target folder - here, 'current' is the place where you have your Dockerfile.
So if you have the Dockerfile in a different directory, it's better to bring it into the main project directory and have a straight path to the jar being added or copied into the image.
I had the same issue with a .tgz file.
It was just about the location of the file. Ensure the file is in the same directory as the Dockerfile.
Also ensure the .dockerignore file doesn't exclude the file's pattern.
In my case the solution was to place the file in a directory and copy the whole directory's content with one command, instead of copying a single file:
COPY --chown=1016:1016 myfiles /home/myapp/myfiles
Make sure your path names match exactly (they are case sensitive); the folder name was /dist/inventory:
COPY /Dist/Inventory ... -- was throwing the error
COPY /dist/inventory ... -- working smoothly
Using Node.js/Express/JavaScript!
In my case I had multiple CMD ["npm", "run", ...] lines in the same Dockerfile, where you can only have one. Hence, the first CMD ["npm", "run", "build"] was not being run and the /build folder was not created, so the command to copy the build folder, COPY --from=build /usr/src/app/build ./build, failed!
Changing the CMD to RUN npm run build fixed the issue.
My Dockerfile:
# first stage, named "build" (this FROM line was missing from the snippet; base image assumed to match the production stage)
FROM node:lts-alpine3.17 as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# copy everything except content from .dockerignore
COPY . ./
#CMD ["npm", "run", "build"]
RUN npm run build
RUN ls -la | grep build

FROM node:lts-alpine3.17 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
RUN pwd
COPY package*.json ./
RUN npm ci --only=production
COPY --from=build /usr/src/app/build ./build
CMD ["node", "build/index.js"]
Here is the reason why it happens: the local directory on the host OS where you are running Docker should contain the file, otherwise you get this error.
One solution is to use
RUN cp <src> <dst>
instead of
COPY <src> <dst>
and then run the command - it works!
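If you build with Maven, the fabric8 docker-maven-plugin (shown below) can drive the image build so the context and file paths are handled by the build tool; the image name and Dockerfile location in this configuration are placeholders: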
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>imagename</name>
        <alias>dockerfile</alias>
        <build>
          <!-- filter>#</filter -->
          <dockerFileDir>dockerfile location</dockerFileDir>
          <tags>
            <tag>latest</tag>
            <tag>0.0.1</tag>
          </tags>
        </build>
        <run>
          <ports>
            <port>8080:8080</port>
          </ports>
        </run>
      </image>
    </images>
  </configuration>
</plugin>
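With that in place, the image can be built and run through Maven (a sketch; the goal names come from the fabric8 plugin):
mvn docker:build   # builds the image from the configured dockerFileDir
mvn docker:start   # starts a container with the configured port mapping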
I have a Python application that expects a file path as its first argument - basically a configuration file.
It should get this file from a volume mount in Docker.
How do I pass this? Snippets:
Python:
import sys
import yaml

with open(sys.argv[1], 'r') as ymlfile:
    cfg = yaml.load(ymlfile)
Dockerfile:
COPY install.py /wiki/install.py
CMD [ "python", "/wiki/install.py", "/config/config.yml"]
Command to run the image:
sudo docker run -v /config:/home/example/config/ app-wiki
I am expecting that the config.yml file available at /home/example/config/ will be copied into the /config dir and be available inside the container,
but it's not working this way.
Where am I going wrong?
The way Docker CMD works, you need to make sure that the file is copied into the correct location with the correct permissions.
Example:
RUN mkdir -p /config
RUN mkdir -p /wiki
ADD <your-location> /config/config.yml
ADD install.py /wiki/install.py
CMD [ "python", "/wiki/install.py", "/config/config.yml"]
Also, Docker will keep the same permissions that you have in your local directory, so make sure you have the correct permissions set on both files.
The issue is that you have got the direction wrong. The format is <hostpath>:<containerpath>
Below
sudo docker run -v /config:/home/example/config/ app-wiki
should be
sudo docker run -v /home/example/config/:/config app-wiki
This way, the config present in /home/example/config/ will be available inside the container in the /config folder.
Edit 1:
Added a bit more explanation to clear up your doubts.
COPY install.py /wiki/install.py
CMD [ "python", "/wiki/install.py", "/config/config.yml"]
When you run the above image it will expect a config to be available at /config/config.yml.
Now, if you have a folder on your host, /home/tarun/wikiconfig, which has a config.yml file, then you run the container using:
sudo docker run -v /home/tarun/wikiconfig:/config app-wiki
If the name of the config file in your wikiconfig folder is different, then you can mount the file directly onto config.yml:
sudo docker run -v /home/tarun/wikiconfig/myconfig.yml:/config/config.yml app-wiki
Both would override the config you added when you built the image, because when you mount a folder or file from the host, anything at the same path inside the container is no longer accessible to the container.
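To verify the mount before debugging the Python side, you can override the CMD and simply list the mounted folder (a quick sketch reusing the names above):
sudo docker run --rm -v /home/tarun/wikiconfig:/config app-wiki ls -la /config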