I am struggling with running the latest changes. Below are the details.
Dockerfile
FROM python:3.7.3
RUN mkdir -p /usr/apps
COPY test.py /usr/apps
RUN pip install mindsdb
CMD [ "python","test.py" ]
Build
docker build -t py37:custom .
Run
docker run -it -v /Development/PetProjects/mindsdb:/usr/apps/ py37:custom
But the container only shows the files as they were at build time.
First of all, when starting your container you are not using volumes but bind mounts. You mount the directory /Development/PetProjects/mindsdb on your host machine to the /usr/apps/ directory in the container. Every change made to files in that directory on your host machine will be visible in the container, and the other way round.
If you wanted to use volumes, you could create one with the docker volume create command and then run the container with that volume: docker container run -v volume_name:path_in_container image_name. You would then be able to stop the container and run it again, passing the same volume to the run command, and changes to the path_in_container directory would be preserved across container creations.
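For example, a minimal sketch of that named-volume variant, using a placeholder volume name (mydata) with the image from your build step:
docker volume create mydata
docker run -it -v mydata:/usr/apps/ py37:custom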
Another thing is that you are trying to mount /usr/apps/ in your container, but you also copied a python script there using the Dockerfile. Note that with your current docker run command, the contents of /Development/PetProjects/mindsdb will replace the contents of /usr/apps/ in your container, and if your script is not in /Development/PetProjects/mindsdb, the script will not be visible in the container.
Moreover, your CMD does not seem to work because of the relative path. You should either change your CMD to CMD [ "python","/usr/apps/test.py" ] or use the WORKDIR instruction - WORKDIR /usr/apps/ - so that the python command is executed from that directory and the script is visible there.
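Put together, a minimal sketch of a revised Dockerfile along those lines (a sketch, not a drop-in replacement for your exact setup) could be:
FROM python:3.7.3
WORKDIR /usr/apps
COPY test.py .
RUN pip install mindsdb
CMD [ "python", "test.py" ]
Keep in mind that if you still bind mount /Development/PetProjects/mindsdb over /usr/apps/, the test.py copied at build time will be hidden behind the mounted directory.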
More information about differences between volumes and bind mounts can be found in docker documentation.
Related
First off, I might have formulated the question inaccurately, feel free to modify if necessary.
Although I am quite new to Docker and all its stuff, I somehow managed to create an image (v2) and a container (cont) on my Win 11 laptop. And I have a demo.py which requires an .mp4 file as an argument.
Now, if I want to run the demo.py file, 1) I go to the project's folder (where demo.py lives), 2) open cmd and 3) run: docker start -i cont. This starts the container as:
root:834b2342e24c:/project#
Then, 4) I copy my_video.mp4 from the local project folder to the container's project/data folder (from another cmd window) as follows:
docker cp data/my_video.mp4 cont:project/data/.
Then I run: 5) python demo.py data/my_video.mp4. After a while, it produces two files, my_video_demo.mp4 and my_video_demo.json, in the data folder in the container. Similarly, I copy them back to my local project folder: 6)
docker cp cont:project/data/my_video_demo.mp4 data/, docker cp cont:project/data/my_video_demo.json data/
Only then can I go to my local project/data folder and inspect the files.
I want to be able to just run a particular command that does 4) - 6) all in one.
I have read about the -v option where, in my case, it would be(?) -v /data:project/data, but I don't know how to proceed.
Is it possible to do what I want? If everything is clear, I hope to get your support. Thank you.
You should indeed use a bind mount with -v. The following command should do it (note that the container-side path has to be absolute):
docker run -it -v /data:/project/data v2
Well, I think I've come up with a way of dealing with it.
I have learned that with -v one can create a place that is shared between the local host and the container. All you need to do is run Docker and provide -v as follows:
docker run --gpus all -it -v C:/Workspace/project/data:/project/data v2:latest python demo.py data/my_video_demo.mp4
--gpus - GPU devices to add to the container ('all' to pass all GPUs);
-it - tells Docker to open an interactive container instance.
Note that every time you run this, a new container is created (docker run always creates a new container).
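If you don't want these one-off containers to pile up, the same command with --rm added (my addition, not part of the original workflow) removes the container automatically when it exits:
docker run --gpus all -it --rm -v C:/Workspace/project/data:/project/data v2:latest python demo.py data/my_video_demo.mp4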
Partial credit to #bill27
I have a dockerfile
FROM python:3
WORKDIR /app
ADD ./venv ./venv
ADD ./data/file1.csv.gz ./data/file1.csv.gz
ADD ./data/file2.csv.gz ./data/file2.csv.gz
ADD ./requirements.txt ./venv/requirements.txt
WORKDIR /app/venv
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "./src/script.py", "/app/data/file1.csv.gz", "/app/data/file2.csv.gz"]
After building an image from it and running it, the image runs the app as it should, but then the container shuts down immediately after finishing. This is definitely problematic since I can't inspect the output file.
I have tried using docker run -d -t <imgname> and docker ps shows the app for a few seconds, but once again, as soon as it finishes the process, the container shuts itself down.
So it's impossible to access; even with docker exec <imgid> -it --entrypoint /bin/bash, it just immediately exits.
I've also tried adding a last RUN /bin/bash after the last CMD but it doesn't help either.
What can I do to actually be able to log into the container and inspect the file?
As long as the container hasn't been removed, you will be able to get at the data. You can find the name of the container using docker ps -a.
Then, if you know the location of the file, you can copy it to your host using
docker cp <container name>:<file> .
Alternatively, you can commit the contents of the container to a new image and run a shell in that using
docker commit <container name> newimagename
docker run --rm -it newimagename /bin/bash
Then you can look around in the container and find your files.
Unfortunately there's no way to start the container up again and look around in it. docker start will start the container, but it will run the same command that was run when you did docker run.
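If all you need is a shell in a fresh container created from the same image, you can also override the command at run time; a minimal sketch, using the <imgname> placeholder from the question:
docker run --rm -it <imgname> /bin/bash
Note that this fresh container only contains what was baked into the image, not the output file produced by the earlier run; for that, the docker cp or docker commit approaches above are the way to go.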
I'm trying to find a way to run a .exe application from Python (I mean making a sandbox of sorts where you can run .exe programs), so that when you run the application, it only affects the folder where the Python script is.
Dockerfile
FROM python:3
ADD main.py .
ADD the.exe .
CMD [ "python", "main.py"]
main.py
import os
os.startfile("/the.exe")
Build
docker build -t isolatedexe:latest .
Run
docker run isolatedexe:latest
Next interact with the container by using
docker exec -i -t <container> /bin/bash
Note: Find the container id with docker ps
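If the container exits before you get a chance to exec into it (likely here, since main.py finishes quickly), one option is to start a throwaway container from the same image with a shell instead of the default CMD; a minimal sketch, assuming the image tag used above:
docker run --rm -it isolatedexe:latest /bin/bash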
I have set up Docker Toolbox on a Win 10 machine. I have some simple single file Python scripts that I want to run in Docker, just for learning purpose.
Started learning Docker today, and Python 3 days ago.
I assume I have set up Docker correctly, I can run the example hello-world image. No error messages during setup.
I am following an instruction from here https://runnable.com/docker/python/dockerize-your-python-application,
which says:
If you only need to run a simple script (with a single file), you can avoid writing a complete Dockerfile. In the examples below, assume you store my_script.py in /usr/src/widget_app/, and you want to name the container my-first-python-script:
docker run -it --rm --name my-first-python-script -v "$PWD":/usr/src/widget_app python:3 python my_script.py
If I type pwd, it shows:
/c/Program Files/Docker Toolbox
And the script I want to run is located here:
C:\Docker\Python\my_script.py
This is what I think should work:
docker run -it --rm --name my-first-python-script -v "$PWD":/c/Docker/Python python:3 python my_script.py
No matter how I try to specify the file directory, I get an error:
python: can't open file 'my_script.py': [Errno 2] No such file or directory
When you run -v "$PWD":/c/Docker/Python, you are saying you want to link your current working directory to the path /c/Docker/Python in the container, which isn't what you want to do. What you are trying to do is link C:\Docker\Python\ on your host to the container folder /usr/src/widget_app.
This command will put your script inside the container path /usr/src/widget_app, then run it:
docker run -it --rm --name my-first-python-script -v /c/Docker/Python:/usr/src/widget_app python:3 python /usr/src/widget_app/my_script.py
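Equivalently, if you first cd into the script's folder, you can keep the "$PWD" form from the tutorial; a sketch of that session (same paths as in the question):
cd /c/Docker/Python
docker run -it --rm --name my-first-python-script -v "$PWD":/usr/src/widget_app python:3 python /usr/src/widget_app/my_script.py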
I am trying to run my Python script on Docker. I tried different ways to do it but was not able to run it on Docker. My Python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac. But I want to know how I can build images and then push them to Docker Hub, and after that pull and run my script on Docker itself.
Going by the question title, if one doesn't want to create a Docker image but just wants to run a script using a standard Python Docker image, it can be run using the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your Dockerfile and script into it and change the current directory (the build context) to it.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is a best practice, as the first thing the Docker engine does on build is read the whole current context.
Next we'll take a look at your dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit, after finding the problem that caused the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
Raspberry Pis have a different architecture underneath (ARM instead of x86_64), which COULD HAVE BEEN the problem, but wasn't. If that had been the problem, switching the parent image to FROM armhf/python would have been enough.
Source
BUT! The error kept occurring.
So the cause of this problem was simply a missing shebang at the top of the Python script. The first line of the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
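For illustration, a minimal sketch of the top of capturing.py with the shebang in place (the body below is a placeholder, not the real script):
#!/usr/bin/env python
# the shebang above lets the exec-form CMD ["capturing.py", ...] start the file directly
print("capturing started")  # placeholder body
Depending on the permissions of the copied file, the script may also need the executable bit, for example via RUN chmod +x capturing.py in the Dockerfile.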
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want to upload it to Docker Hub, you need to log into Docker Hub with docker login, then upload the image with docker push.
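A minimal sketch of that upload workflow, assuming a Docker Hub account named yourhubuser (a placeholder) and the image built above:
docker login
docker tag pulkit/scriptname:1.0 yourhubuser/scriptname:1.0
docker push yourhubuser/scriptname:1.0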
I followed #samprog's (most accepted) answer on my machine running Ubuntu 14.04.6
and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error after changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports some other modules, then you need to modify the COPY statement in your Dockerfile as follows - COPY *.py ./
Hope this will be useful for others.
Another way to run a Python script on Docker can be:
Copy the local Python script into the container:
docker cp yourlocalscript.path container_id:/dst_path/
The container id can be found using:
docker ps
Run the Python script inside the container:
docker exec -it container_id python /container_script_path.py
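Put together, a hypothetical session could look like this (the name my_container and the /tmp path are placeholders, not from the original answer):
docker cp yourlocalscript.py my_container:/tmp/
docker exec -it my_container python /tmp/yourlocalscript.py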
It's very simple.
1- Go to your Python script directory and create a file with this title, without any extension:
Dockerfile
2- Now open the Dockerfile and write your script name instead of sci.py
( content of Dockerfile )
FROM python:slim
# I chose the slim version; you can choose another tag, for example python:3
WORKDIR /usr/local/bin
# replace sci.py with your script name in the two lines below
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it. Now you need to build an image from this Dockerfile and your .py script, and then run it.
3- In the folder's address bar, type cmd and press the Enter key:
4- When the cmd window opens, type in it:
docker build -t my-python-app . # this creates an image in Docker with the tag my-python-app
5- And finally run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; the dependency HELL between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker instance.
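After reloading your shell configuration, usage would look something like this (my_script.py is a placeholder):
source ~/.zshrc
python2 my_script.py
Because "$PWD" is mounted and used as the working directory, the script runs inside the alpine-python2 container while reading and writing files in your current host directory.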