I need to navigate to my docker-compose.yml file, but writing cd docker-compose.yml doesn't work. The terminal tells me that the directory name is wrong or invalid (I couldn't find an exact translation of the error). I want to navigate to the docker-compose.yml so I can run docker up in the terminal.
You are already in the directory for docker-compose, but you can't cd into a file; you want to run commands against it.
You need to run cd app.
docker-compose.yml is a file inside the directory app. Since cd moves you into a directory, it makes no sense to run it on a file.
The command you want, once you are in the same folder as your docker-compose.yml, is docker-compose up.
So to sum it up:
cd app
docker-compose up
It's not possible to cd into a file; you can use the docker-compose command to run it.
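Alternatively, if you would rather not change directories at all, docker-compose accepts a -f flag that points at the compose file directly. A minimal sketch, assuming the file lives in ./app:
docker-compose -f app/docker-compose.yml up
# or detached, then follow the logs:
docker-compose -f app/docker-compose.yml up -d
docker-compose -f app/docker-compose.yml logs -f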
FROM python:3
WORKDIR /Users/vaibmish/Documents/new/graph-report
RUN pip install graphreport==1.2.1
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
CMD [ graphreport ]
THIS IS PART OF THE DOCKERFILE
I want to remove the hard-coded cd path from the file and have something like -v there instead, so that whoever runs it can supply their own volume path.
The line
CMD [ cd /Users/vaibmish/Documents/new/graph-report/graphreport_metrics ]
is wrong. You achieve the same with WORKDIR:
WORKDIR /Users/vaibmish/Documents/new/graph-report/graphreport_metrics
WORKDIR creates the path if it doesn't exist and then changes the current directory to that path (same as mkdir -p /path/new && cd /path/new)
You can also declare the path as a volume and instruct who runs the container to provide their own path (docker run -v host_path:container_path ...)
VOLUME /Users/vaibmish/Documents/new/graph-report
A final note: It looks like these paths are from the host. Remember that the paths in the Dockerfile are not host paths. They are paths inside the container.
Typical practice here is to pick some fixed path inside the Docker container. It should be a different path from where your application is installed; it does not need to match any particular host path at all.
FROM python:3
RUN pip3 install graphreport==1.2.1
WORKDIR /data
CMD ["graphreport"]
docker build -t me/graphreport:1.2.1 .
docker run --rm \
-v /Users/vaibmish/Documents/new/graph-report:/data \
me/graphreport:1.2.1
(Remember that only the last CMD has an effect, and if it's not a well-formed JSON array, Docker will interpret it as a shell command. What you show in the question would run the test(1) command and not the program you're installing.)
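If you want to verify what CMD actually ended up in an image, docker inspect can show it; a quick sketch using the image tag from the build command above:
docker inspect --format '{{json .Config.Cmd}}' me/graphreport:1.2.1
# a malformed array like CMD [ cd /some/path ] falls back to shell form, so it appears wrapped in "/bin/sh -c" here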
If you're trying to install a single package from PyPI and just run it on local files, a Python virtual environment will be much easier to set up than anything based on Docker, and will essentially work as you expect:
python3 -m venv graphreport
. graphreport/bin/activate
pip3 install graphreport==1.2.1
cd /Users/vaibmish/Documents/new/graph-report
graphreport
deactivate # switch back to system Python/pip
All of the installed Python code is inside the graphreport virtual environment directory, and if you don't need this application again, you can just delete the directory tree.
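If you later want to throw the environment away, deleting the directory is all that's needed; a tiny sketch matching the layout above:
rm -rf graphreport    # removes the virtual environment and everything pip installed into it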
I am trying to run my Python script on Docker. I tried different ways to do it but was not able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac, but I want to know how I can make an image and push it to Docker; after that I want to pull it and run my script on Docker itself.
Going by the question title, if one doesn't want to create a Docker image and just wants to run a script using the standard Python Docker image, it can be run with the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your Dockerfile and script in there and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the Docker engine does on a build is read the whole current context.
Next we'll take a look at your dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a sensible name (using dots is generally discouraged).
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
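If the container has already exited, you can still read what the script printed and then clean up; a short sketch using the container name from the run command above:
docker logs capturing    # shows the script's stdout/stderr
docker rm capturing      # removes the stopped container so the name can be reused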
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
The Raspberry Pi has a different underlying architecture (ARM instead of x86_64), which could have been the problem, but wasn't. If it had been, switching the parent image to FROM armhf/python would have been enough.
BUT! The error kept occurring.
So the actual cause was a missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
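To double-check the fix on the host before rebuilding, something like this should do (script name taken from the Dockerfile above):
head -n 1 capturing.py    # should print: #!/usr/bin/env python
chmod +x capturing.py     # COPY preserves the executable bit into the image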
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want it to upload to the DockerHub, you need to log into the DockerHub with docker login, then upload the image with docker push.
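A minimal sketch of that publish flow, using the pulkit/scriptname:1.0 tag from the build step above (the part before the slash has to be your Docker Hub username):
docker login                          # prompts for your Docker Hub credentials
docker push pulkit/scriptname:1.0     # uploads the image to your Docker Hub account
# on any other machine:
docker pull pulkit/scriptname:1.0
docker run --rm pulkit/scriptname:1.0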
I followed the most accepted answer (by @samprog) on my machine running Ubuntu VERSION="14.04.6"
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error by changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
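For completeness, a small sketch of building and running that Dockerfile; the image name here is just an example:
docker build -t capturing-app .
docker run --rm capturing-app    # runs CMD ["python", "capturing.py"] and exits when the script finishes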
Another way to run a Python script on Docker can be:
Copy the local Python script into the container:
docker cp yourlocalscript.path container_id:/dst_path/
The container ID can be found using:
docker ps
Run the Python script inside the container:
docker exec -it container_id python /container_script_path.py
It's very simple.
1- Go to your Python script's directory and create a file with this title, without any extension:
Dockerfile
2- Now open the Dockerfile and write your script name in place of sci.py
(content of the Dockerfile)
# I chose the slim version; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script name in the two lines below
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it. Now you need to create an image from this Dockerfile and the .py script,
and then run it.
3- In the folder's address bar, type cmd and press Enter.
4- When the cmd window opens, type:
docker build -t my-python-app .    # this creates an image in Docker named my-python-app
5- And finally run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; the dependency hell between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker container.
I want to create a docker image. This is my work directory:
Dockerfile.in test.json test.py
And this is my Dockerfile:
COPY ./test.json /home/test.json
COPY ./test.py /home/test.py
RUN python test.py
When I launch this command:
docker build -f Dockerfile.in -t 637268723/test:1.0 .
It gives me this error:
Step 1/5 : COPY ./test.json /home/test.json
 ---> Using cache
 ---> 6774cd225d60
Step 2/5 : COPY ./test.py /home/test.py
COPY failed: stat /var/lib/docker/tmp/docker-builder428014112/test.py:
no such file or directory
Can anyone help me?
You should put those files into the same directory as the Dockerfile.
Check if there's a .dockerignore file; if so, add:
!mydir/test.json
!mydir/test.py
Q1: Check the .dockerignore file in your build path; the files or directories you want to copy may be in its ignore list!
Q2: The COPY directive is based on the context in which you are building the image, so be aware of which directory you are building the image from (see the sketch below)! See: https://docs.docker.com/engine/reference/builder/#copy
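To illustrate Q2, paths in COPY are resolved against the build context (the last argument to docker build), not against the Dockerfile's own location; a small sketch with a hypothetical layout:
# hypothetical layout: ./test.py and ./docker/Dockerfile.in
docker build -f docker/Dockerfile.in -t test:1.0 .        # context is ".", so COPY test.py ... works
docker build -f docker/Dockerfile.in -t test:1.0 docker/  # context is docker/, so COPY test.py ... fails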
I had to use the following command to start the build:
docker build .
Removing ./ from source path should resolve your issue:
COPY test.json /home/test.json
COPY test.py /home/test.py
I was also facing the same issue. I moved my Dockerfile to the root of the project, and then it worked.
Make sure the context you build your image with is set correctly. You can set the context when building as an argument.
Example:
docker build -f ./Dockerfile .. where '..' is the context in this example.
In your case, removing ./ should solve the issue. I had another case where I was using a directory from the parent directory; Docker can only access files below the build context (by default, the directory where the Dockerfile is). A workaround sketch follows the example below.
So if I have a directory structure /root/dir and the Dockerfile at /root/dir/Dockerfile,
I cannot do the following:
COPY root/src /opt/src
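One way around that, sketched here with the illustrative /root/dir paths from above, is to build with the parent directory as the context and point -f at the Dockerfile, so the sibling directory becomes reachable:
cd /root
docker build -f dir/Dockerfile -t myimage .    # context is /root, so a COPY src /opt/src line now resolves against /root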
In my case, it was the comment line that was messing up the COPY command
I removed the comment after the COPY command and placed it on a dedicated line above the command. Surprisingly, that resolved the issue.
Faulty Dockerfile command
COPY qt-downloader . # https://github.com/engnr/qt-downloader -> contains the script to auto download qt for different architectures and versions
Working Dockerfile command
# https://github.com/engnr/qt-downloader -> contains the script to auto download qt for different architectures and versions
COPY qt-downloader .
Hope it helps someone.
This may help someone else facing a similar issue.
Instead of leaving the files floating in the same directory as the Dockerfile, create a directory, place the files to copy inside it, and then try:
COPY mydir/test.json /home/test.json
COPY mydir/test.py /home/test.py
Another potential cause is that Docker will not follow symbolic links by default (i.e. don't use ln -s).
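If a symlink is the culprit, one workaround is to materialize the link into a real copy inside the build context before building; a small sketch (the mydir paths are hypothetical):
cp -rL /path/to/real/mydir ./mydir    # -L dereferences symlinks so real files land in the context
docker build -t myimage .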
The following structure in docker-compose.yaml will allow you to have the Dockerfile in a subfolder from the root:
version: '3'
services:
  db:
    image: postgres:11
    environment:
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 127.0.0.1:5432:5432
  web:
    build:
      context: ".."
      dockerfile: dockerfiles/Dockerfile
    command: ...
    ...
Then, in your Dockerfile, which is in the same directory as docker-compose.yaml, you can do the following:
ENV APP_HOME /home
RUN mkdir -p ${APP_HOME}
# Copy the file to the directory in the container
COPY test.json ${APP_HOME}/test.json
COPY test.py ${APP_HOME}/test.py
# Browse to that directory created above
WORKDIR ${APP_HOME}
You can then run docker-compose from the parent directory like:
docker-compose -f .\dockerfiles\docker-compose.yaml build --no-cache
In my case, I had to put all my project files into a subdirectory
app/          <- inside the app directory we have the following
    package.js
    src
    assets
Dockerfile
Then I copied the files this way:
COPY app ./
I had this error while trying to build a Docker image and push it to the container registry. Inside my Dockerfile I tried to copy a jar file from the target folder and execute it with the java -jar command.
I solved the issue by removing the .jar file and the target folder from the .gitignore file.
When using the Docker Compose files, the publish step publishes to obj/Docker/Publish. When I copied my files there and pointed my Dockerfile to this directory (as generated), it worked…
Docker looks for files relative to the build context (the current directory),
i.e. if your command is
COPY target/xyz.jar app.jar
ADD target/xyz.jar app.jar
then the xyz.jar should be in the current directory's target folder; here "current" means the place where you have your Dockerfile.
So if you have the Dockerfile in a different directory, it's better to bring it into the main project directory and have a straight path to the jar being added or copied to the image.
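A quick way to sanity-check this before building, sketched with the illustrative xyz.jar name from above:
ls target/xyz.jar        # run from the directory that holds the Dockerfile; the file must be listed
docker build -t myapp .  # only then will COPY target/xyz.jar app.jar find it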
I had the same issue with a .tgz file.
It was just about the location of the file. Ensure the file is in the same directory as the Dockerfile.
Also ensure the .dockerignore file doesn't exclude the file via a matching pattern.
In my case the solution was to place the file in a directory and copy the whole directory's content with one command, instead of copying a single file:
COPY --chown=1016:1016 myfiles /home/myapp/myfiles
Make sure your path names match exactly (they are case sensitive); the folder name was /dist/inventory:
COPY /Dist/Inventory ... -- was throwing the error
COPY /dist/inventory ... -- working smoothly
Using nodejs/express/javascript!
In my case I had multiple CMD ["npm", "run", ...] instructions in the same Dockerfile, where only one takes effect. Hence, the first CMD ["npm", "run", "build"] was never run and the /build folder was not created, so the instruction to copy the build folder, COPY --from=build /usr/src/app/build ./build, failed!
Changing the CMD to RUN npm run build fixes the issue.
My Dockerfile:
# build stage (base image assumed to match the production stage below)
FROM node:lts-alpine3.17 as build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# copy everything except content from .dockerignore
COPY . ./
#CMD ["npm", "run", "build"]
RUN npm run build
RUN ls -la | grep build
FROM node:lts-alpine3.17 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
RUN pwd
COPY package*.json ./
RUN npm ci --only=production
COPY --from=build /usr/src/app/build ./build
CMD ["node", "build/index.js"]```
Here is the reason why it happens: the local directory on the host OS where you are running Docker should contain the file; otherwise you get this error.
One solution is to:
use RUN cp <src> <dst> instead of
COPY <src> <dst>
then run the command; it works!
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <configuration>
    <images>
      <image>
        <name>imagename</name>
        <alias>dockerfile</alias>
        <build>
          <!-- filter>#</filter -->
          <dockerFileDir>dockerfile location</dockerFileDir>
          <tags>
            <tag>latest</tag>
            <tag>0.0.1</tag>
          </tags>
        </build>
        <run>
          <ports>
            <port>8080:8080</port>
          </ports>
        </run>
      </image>
    </images>
  </configuration>
</plugin>
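Assuming that fabric8 docker-maven-plugin configuration is in your pom.xml, the image can then be built and run through Maven; a minimal sketch of the usual goals:
mvn clean package docker:build    # builds the image described in the <build> section
mvn docker:start                  # starts a container using the <run> section
mvn docker:stop                   # stops and removes that container again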
I have logged into the container with the command below. Now, from the Python script, I want to copy a file from the container to the host system. How do I do this?
sudo docker run -ti video:new /bin/bash
import os
os.system('cp /tmp/a.txt HOST:/tmp/a.txt')
Map a volume to share data with your host from the container.
docker run -v /tmp/:/tmp/ -ti video:new /bin/bash
Then let your python script copy the file to the /tmp directory inside your container.
import os
os.system('cp /path/to/a.txt /tmp/a.txt')
Through the -v mapping, the file is placed on the Docker host in the /tmp directory. Once you close your Docker container, the file will still exist on the host as /tmp/a.txt.
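Putting it together, a small end-to-end sketch (the source path inside the container is hypothetical):
docker run -v /tmp/:/tmp/ -ti video:new /bin/bash
# inside the container:
python -c "import shutil; shutil.copy('/path/inside/container/a.txt', '/tmp/a.txt')"
exit
# back on the host:
ls -l /tmp/a.txt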
The container can't copy information outside of its isolation. If you want to share information between the container and the host, use a volume mapping to do that (-v):
https://docs.docker.com/userguide/dockervolumes/