I'm trying to find a way to run a .exe application in Python (I mean making a kind of virtual box where you can run .exe programs), and when you run the application it should only affect the folder where the python script is.
Dockerfile
FROM python:3
ADD main.py .
ADD the.exe /
CMD [ "python", "main.py"]
main.py
import os
# os.startfile is Windows-only; it launches the .exe that was added at the image root
os.startfile("/the.exe")
Build
docker build -t isolatedexe:latest .
Run
docker run isolatedexe:latest
Next, interact with the container using
docker exec -i -t <container> /bin/bash
Note: find the container id with docker ps
I am having an odd problem when running a python script in a docker container.
When I start the script on the same line as I start the docker container, e.g.
docker run -it --rm <container>:<version> /bin/bash --login -c "python /opt/project/main.py"
It raises an ImportError for a module.
However, when I first start the docker container and afterwards start the script:
docker run -it --rm <container>:<version> /bin/bash
python /opt/project/main.py
everything behaves as it should. So the issue only occurs when I start the script on the same line.
Hope you can give me a heads-up. Thanks!
I did find a solution which I am happy to share with any random googlers:
The issue I had was that the python dependencies I used were source-built catkin dependencies. Thus the setup.bash file from the catkin workspace needed to be sourced in order for the libraries to be found. Since the .bashrc is not sourced when starting the container like I mentioned, it has to be done manually:
docker run -it --rm <container>:<version> /bin/bash --login -c "source /path/to/setup.bash && python /opt/project/main.py"
I am struggling with running the latest changes. Below are the details.
Dockerfile
FROM python:3.7.3
RUN mkdir -p /usr/apps
COPY test.py /usr/apps
RUN pip install mindsdb
CMD [ "python","test.py" ]
Build
docker build -t py37:custom .
Run
docker run -it -v /Development/PetProjects/mindsdb:/usr/apps/ py37:custom
But it only shows the changes that were present at the time of the build.
First of all, while starting your container you are not using volumes but bind mounts. You mount the directory /Development/PetProjects/mindsdb on your host machine to the /usr/apps/ directory in the container. Every change made to files in this directory on your host machine will be visible in the container, and the other way round.
If you wanted to use volumes, you could create one using the docker volume create command and then run the container with this volume: docker container run -v volume_name:path_in_container image_name. You would then be able to stop the container and run it again, passing this volume to the run command, and changes to the path_in_container directory would be preserved across container creations.
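For example, a minimal sketch of that flow (mindsdb_data is just an assumed volume name):
docker volume create mindsdb_data
docker run -it -v mindsdb_data:/usr/apps/ py37:custom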
Another thing is that you are trying to mount /usr/apps/ in your container, and you copied a python script there using the Dockerfile. Note that with your current docker run command the contents of /Development/PetProjects/mindsdb will replace the contents of /usr/apps/ in your container, and if you do not have your script in /Development/PetProjects/mindsdb, the script will not be visible in the container.
Moreover, your CMD seems not to work because the path is relative. You should either change your CMD to CMD [ "python","/usr/apps/test.py" ] or use the WORKDIR instruction - WORKDIR /usr/apps/ - so that your python command is executed from this directory and the script is visible there.
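For example, a sketch of the Dockerfile from the question with the WORKDIR suggestion applied (assuming the script stays at /usr/apps):
FROM python:3.7.3
RUN mkdir -p /usr/apps
WORKDIR /usr/apps
COPY test.py /usr/apps
RUN pip install mindsdb
CMD [ "python", "test.py" ]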
More information about the differences between volumes and bind mounts can be found in the Docker documentation.
I'm trying out Docker these days and I want to create virtual environments in Python in Docker. I downloaded Miniconda3 from Docker Hub and tested it out with a basic hello world program written in Python.
I ran:
docker run -it continuumio/miniconda3 /bin/bash
Then on another terminal I ran:
docker exec laughing_wing "python ~/Documents/Test/hello_world.py"
Where the name of the docker container is laughing_wing, and my hello_world.py is in the Documents/Test directory.
But running the second command I get:
"OCI runtime exec failed: exec failed: container_linux.go:344:
starting container process caused "exec: \"python
~/Documents/Test/hello_world.py\": stat python
~/Documents/Test/hello_world.py: no such file or directory": unknown"
I'm confused about this.
Looks like you're trying to have the docker container run a python file from your machine. The docker container is isolated from its host, so you need to either create your own Docker image where you add the file, or mount the ~/Documents/Test directory into your docker container. Something like this:
docker run -it -v ~/Documents/Test:/Test continuumio/miniconda3 /bin/bash
docker exec <container_name> python /Test/hello_world.py
I am trying to run my python script on Docker. I tried different ways to do it but was not able to run it on Docker. My python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac. But I want to know how I can make an image and then push it to Docker Hub; after that I want to pull and run my script on Docker itself.
Going by the question title, and if one doesn't want to create a docker image but just wants to run a script using a standard Python docker image, it can be run using the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your dockerfile and script in there and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is for best practice, as the first thing the docker engine does on a build is read the whole current context.
Next we'll take a look at your dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
There's a different architecture beneath Raspberry Pis (ARM instead of x86_64), which COULD'VE BEEN the problem, but wasn't. If that had been the problem, switching the parent image to FROM armhf/python would've been enough.
Source
BUT! The error kept occurring.
So the cause of this problem was simply a missing shebang at the top of the python script. The first line of the script needs to be #!/usr/bin/env python and that should solve the problem.
Source
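For reference, a minimal sketch of that fix (assuming the script is the capturing.py from the Dockerfile above, and that its executable bit is set on the build host, since COPY preserves file permissions):
#!/usr/bin/env python
# first line of capturing.py; the rest of the script follows unchanged
If the file is not already executable on the build host, it may also need:
chmod +x capturing.py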
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want to upload it to Docker Hub, you need to log into Docker Hub with docker login, then upload the image with docker push.
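A quick sketch of those commands, reusing the image name from the build step above (pulkit stands in for your Docker Hub username):
docker run pulkit/scriptname:1.0
docker login
docker push pulkit/scriptname:1.0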
I followed #samprog's (most accepted) answer on my machine running UBUNTU VERSION="14.04.6" and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error after changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports some other modules, then you need to modify the COPY statement in your Dockerfile as follows - COPY *.py ./
Hope this will be useful for others.
Another way to run a python script on docker can be:
Copy the local python script into the container:
docker cp yourlocalscript.py container_id:/dst_path/
container id can be found using:
docker ps
Run the python script in the container:
docker exec -it container_id python /dst_path/yourlocalscript.py
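For example, a hypothetical end-to-end run (the container id 1a2b3c4d5e6f and the destination /tmp/ are just placeholders):
docker cp yourlocalscript.py 1a2b3c4d5e6f:/tmp/
docker exec -it 1a2b3c4d5e6f python /tmp/yourlocalscript.py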
It's very simple.
1- Go to your Python script's directory and create a file with this name, without any extension:
Dockerfile
2- Now open the Dockerfile and write your script's name instead of sci.py
(content of Dockerfile)
# I chose the slim variant; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script's name here and in the CMD below
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it; next you will create an image from this Dockerfile and your script, and then run it.
3- In the folder's address bar, type cmd and press the Enter key:
4- When the cmd window opens, type in it:
docker build -t my-python-app .
(this creates an image in docker with the name my-python-app)
5- And finally run the image:
docker run -it --rm --name my-running-app my-python-app
I've encountered this problem recently; the dependency hell between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into a Python 2 shell running inside a Docker container.
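A quick usage sketch, assuming the alias above is loaded and that the image has no entrypoint (so any extra arguments replace its default python2 command; legacy_script.py is just a hypothetical file in the current directory):
python2
python2 python2 legacy_script.py
The first command opens an interactive Python 2 shell inside the container; the second runs a script from the mounted current directory by explicitly invoking python2 inside the container.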