I can run a bash script local to my docker client (not local to the docker host or targeted container), without using volumes or copying the script to the container:
docker run debian bash -c "`cat script.sh`"
Q1: How do I do the equivalent with a Django container? The following have not worked, but may help demonstrate what I'm asking for (the bash script printfs the python script line with the expanded args):
docker run django shell < `cat script.py`
cat script.py | docker run django shell
Q2: How do I pass arguments to a script.py that is fed to a dockerized manage.py? Again, examples of what does not work (for me):
./script.sh arg1 arg2 | docker run django shell
docker run django shell < echo "$(./script.sh arg1 arg2)"
I think the best way for you is to use a custom Dockerfile that uses the COPY or ADD command to move whatever scripts you need into the container.
As for passing arguments, you can use the ENTRYPOINT command in your image, like the example below:
ENTRYPOINT django shell /home/script.sh
Then you can use docker run <image> arg1 arg2 to pass the arguments.
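A minimal sketch of that idea (the image name, script path, and use of a Python entrypoint are assumptions for illustration, not taken from the question):
FROM python:3
COPY script.py /home/script.py
# the entrypoint is fixed; anything after the image name on "docker run" is appended as arguments
ENTRYPOINT ["python", "/home/script.py"]
With that image built as, say, my-image, running docker run my-image arg1 arg2 executes python /home/script.py arg1 arg2 inside the container.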
Here is a link on passing command line arguments to Python: http://www.tutorialspoint.com/python/python_command_line_arguments.htm
e.g.: python script.py -param1
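For instance, a minimal script.py that just echoes its arguments (purely illustrative) could look like this:
import sys

# print every argument passed on the command line
for i, arg in enumerate(sys.argv[1:], start=1):
    print("arg %d: %s" % (i, arg))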
If the script is already available inside the image, you can trigger it from the Dockerfile (passing parameters along):
RUN python /script.py -param1 <value>
Extra:
Having said that, it is always awkward to edit the Dockerfile when there are many parameters that change frequently. Hence a small shell script can be written as a wrapper around the Dockerfile, for example using build arguments:
Dockerwrapper.sh
# pass the parameter through to the build as a build argument
docker build --build-arg PARAM1="$1" --tag <name> .
Dockerfile
ARG PARAM1
RUN python script.py -param1 $PARAM1
If the script is not present inside the image, you can copy it in and then remove it again using the COPY and RUN commands, as sketched below.
(Reason: since a container is an isolated environment, running the script from outside it is not possible, as far as I know.)
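A rough sketch of that copy-run-delete pattern (the path and parameter are placeholders):
COPY script.py /tmp/script.py
RUN python /tmp/script.py -param1 value && rm /tmp/script.py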
Hope this answered your question. All the best.
Related
First off, I might have formulated the question inaccurately, feel free to modify if necessary.
Although I am quite new to Docker and all its stuff, I have somehow managed to create an image (v2) and a container (cont) on my Win 11 laptop. And I have a demo.py which requires an .mp4 file as an arg.
Now, if I want to run the demo.py file, 1) I go to the project's folder (where demo.py lives), 2) open cmd and 3) run: docker start -i cont. This starts the container as:
root:834b2342e24c:/project#
Then, 4) I copy my_video.mp4 from the local project folder to the container's project/data folder (in another cmd window) as follows:
docker cp data/my_video.mp4 cont:project/data/.
Then I run: 5) python demo.py data/my_video.mp4. After a while, it makes two files: my_video_demo.mp4 and my_video.json in the data folder in the container. Similarly, I should copy them back to my local project folder: 6)
docker cp cont:project/data/my_video_demo.mp4 data/, docker cp cont:project/data/my_video_demo.json data/
Only then can I go to my local project/data folder and inspect the files.
I want to be able to just run a particular command that does 4) - 6) all in one.
I have read about the -v option where, in my case, it would be(?) -v /data:project/data, but I don't know how to proceed.
Is it possible to do what I want? If everything is clear, I hope to get your support. Thank you.
You should indeed use Docker volumes. The following command should do it.
docker run -it -v /data:/project/data v2
Well, I think I've come up with a way of dealing with it.
I have learned that with -v one can create a place that is shared between the local host and the container. All you need to do is run docker and provide -v as follows:
docker run --gpus all -it -v C:/Workspace/project/data:/project/data v2:latest python demo.py data/my_video_demo.mp4
--gpus - GPU devices to add to the container ('all' to pass all GPUs);
-it - tells Docker to open an interactive container instance.
Note that every time you run this, docker run will create a new container; add --rm if you want it removed automatically when it exits.
Partial credit to #bill27
I have a docker image that exposes 9000 port for server. After the server is running, I need to execute the 3 python scripts which depends on server so, they can only get executed after server.py is running however, after CMD command, the other code do not get executed and remains stuck. What are the possible suggestion to run 3 scripts in same container?
FROM python:3.7.3 as build
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# CMD [ "python", "./server.py" ] (The following 3 scripts depends on server.py for execution)
RUN python /app/script1.py
RUN python /app/script2.py
RUN python /app/script3.py
EXPOSE 9000
CMD [ "python", "./server.py" ]
As written in the Dockerfile reference,
There can only be one CMD instruction in a Dockerfile
The CMD instruction tells the container what command to run when it starts; when you run the container, that is what will be executed.
If running python ./server.py is a blocking call (which I'm assuming it is, since it's called a server, and most likely responds to some kind of requests), then this won't be possible.
Instead, try restructuring your scripts so that they are run when the server is run, by doing everything you do in script1.py, script2.py, script3.py after the server has been started inside of server.py.
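A rough sketch of that restructuring (the script paths and the stand-in server are assumptions; the real server.py would keep its own blocking serving code):
import subprocess
import threading
import http.server

def run_scripts():
    # run the three dependent scripts once the server has had time to come up
    for script in ("/app/script1.py", "/app/script2.py", "/app/script3.py"):
        subprocess.run(["python", script], check=True)

# fire the scripts a few seconds after startup, in a background thread
threading.Timer(5.0, run_scripts).start()

# placeholder for the real blocking server on port 9000
http.server.HTTPServer(("", 9000), http.server.SimpleHTTPRequestHandler).serve_forever()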
If instead this is about script1.py... sending requests to the server, I'd recommend not including those in the container. Instead, you can simply run those scripts, manually, from the terminal while the server container is running.
You can just execute those scripts from the command line using docker exec after the container has started. You'll just need to know what the container name is:
docker exec <CONTAINER NAME> python /app/script1.py
docker exec <CONTAINER NAME> python /app/script2.py
docker exec <CONTAINER NAME> python /app/script3.py
Or just make a bash script, say my_script.sh, to run them all and execute that:
#!/usr/bin/env bash
docker exec <CONTAINER NAME> python /app/script1.py
docker exec <CONTAINER NAME> python /app/script2.py
docker exec <CONTAINER NAME> python /app/script3.py
And then run it from the host:
./my_script.sh
I have set up Docker Toolbox on a Win 10 machine. I have some simple single file Python scripts that I want to run in Docker, just for learning purpose.
Started learning Docker today, and Python 3 days ago.
I assume I have set up Docker correctly, I can run the example hello-world image. No error messages during setup.
I am following an instruction from here https://runnable.com/docker/python/dockerize-your-python-application,
which says:
If you only need to run a simple script (with a single file), you can avoid writing a complete Dockerfile. In the examples below, assume you store my_script.py in /usr/src/widget_app/, and you want to name the container my-first-python-script:
docker run -it --rm --name my-first-python-script -v "$PWD":/usr/src/widget_app python:3 python my_script.py
If I type pwd, it shows:
/c/Program Files/Docker Toolbox
And the script I want to run is located here:
C:\Docker\Python\my_script.py
This is what I think should work:
docker run -it --rm --name my-first-python-script -v "$PWD":/c/Docker/Python python:3 python my_script.py
No matter how I try to specify the file directory, I get an error:
python: can't open file 'my_script.py': [Errno 2] No such file or directory
When you run -v "$PWD":/c/Docker/Python, you are saying you want to link your current working directory to the path /c/Docker/Python in the container, which isn't what you want to do. What you are trying to do is link C:\Docker\Python\ on your host to the container folder /usr/src/widget_app.
This command will put your script inside the container path /usr/src/widget_app, then run it:
docker run -it --rm --name my-first-python-script -v /c/Docker/Python:/usr/src/widget_app python:3 python /usr/src/widget_app/my_script.py
I have docker image with bash and python scripts inside it:
1) entrypoint.sh (this script runs python file);
2) parser.py
When developers run a container, they can pass env variables with a prefix like MYPREFIX_*.
docker run name -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ...
There are more than 100 possible keys, they change from time to time (depending on remote configuration file).
I'd like to pass all variables to the bash script and then to the python script.
I can't define all variables inside Dockerfile (keys can change). I also can't use env_file.
Are there any suggestions?
Content of entrypoint:
/usr/bin/python3 "/var/hocon-parser.py"
/usr/bin/curl -sLo /var/waves.jar "https://github.com/wavesplatform/Waves/releases/download/v$WAVES_VERSION/waves-all-$WAVES_VERSION.jar"
/usr/bin/java -jar /var/waves.jar /waves-config.conf
The problem was in the run command. You can't pass env variables after the image name; everything after the image name is treated as the command to run. This command works:
docker run -e MYPREFIX_1=true -e MYPREFIX_DEMO=100 ... name
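Once they are set via docker run -e, the variables are already in the container's environment, so the entrypoint shell script and any python it launches inherit them automatically. For example, parser.py could collect every MYPREFIX_* key like this (a sketch; only the prefix comes from the question):
import os

# gather all environment variables that start with the expected prefix
prefixed = {k: v for k, v in os.environ.items() if k.startswith("MYPREFIX_")}
for key, value in sorted(prefixed.items()):
    print("%s=%s" % (key, value))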
I am trying to run my python script on Docker. I tried different ways to do it but was not able to run it on Docker. My python script is given below:
import os
print ('hello')
I have already installed Docker on my Mac. But I want to know how I can make an image and push it to Docker Hub, and after that pull it and run my script on Docker itself.
Going by the question title, if one doesn't want to create a docker image but just wants to run a script using the standard python docker images, it can be run using the command below:
docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3.7-alpine python script_to_run.py
Alright, first create a specific project directory for your docker image. For example:
mkdir /home/pi/Desktop/teasr/capturing
Copy your dockerfile and script in there and change the current context to this directory.
cp /home/pi/Desktop/teasr/capturing.py /home/pi/Desktop/teasr/dockerfile /home/pi/Desktop/teasr/capturing/
cd /home/pi/Desktop/teasr/capturing
This is best practice, as the first thing the docker engine does on build is read the whole current context.
Next we'll take a look at your dockerfile. It should look something like this now:
FROM python:latest
WORKDIR /usr/local/bin
COPY capturing.py .
CMD ["capturing.py", "-OPTIONAL_FLAG"]
The next thing you need to do is build it with a smart name. Using dots is generally discouraged.
docker build -t pulkit/capturing:1.0 .
Next thing is to just run the image like you've done.
docker run -ti --name capturing pulkit/capturing:1.0
The script now gets executed inside the container and will probably exit upon completion.
Edit after finding the problem that created the following error:
standard_init_linux.go:195: exec user process caused "exec format error"
Raspberry Pis use a different architecture (ARM instead of x86_64), which could have been the problem, but wasn't. If it had been, switching the parent image to FROM armhf/python would have been enough.
Source
BUT! The error kept occurring.
So the cause of this problem was simply a missing shebang at the top of the python script. The first line in the script needs to be #!/usr/bin/env python, and that should solve the problem.
Source
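For illustration (not part of the original answer), the top of capturing.py would then look like the lines below; if the exec format error persists, the file may also need to be marked executable (e.g. RUN chmod +x capturing.py in the Dockerfile):
#!/usr/bin/env python
# the shebang line above tells the kernel which interpreter to use
print("capturing started")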
You need to create a dockerfile in the directory your script is in.
You can take this template:
FROM python:latest
COPY scriptname.py /usr/local/share/
CMD ["scriptname.py", "-flag"]
Then simply execute docker build -t pulkit/scriptname:1.0 . and your image should be created.
Your image should be visible under docker images. If you want to execute it on your local computer, use docker run.
If you want to upload it to Docker Hub, you need to log in to Docker Hub with docker login, then upload the image with docker push.
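A sketch of those last two steps (the Docker Hub username is a placeholder):
docker login
docker tag pulkit/scriptname:1.0 <your-dockerhub-username>/scriptname:1.0
docker push <your-dockerhub-username>/scriptname:1.0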
I followed #samprog's (most accepted) answer on my machine running Ubuntu 14.04.6 and was getting "standard_init_linux.go:195: exec user process caused "exec format error"".
None of the solutions mentioned above worked for me.
I fixed the error after changing my Dockerfile as follows:
FROM python:latest
COPY capturing.py ./capturing.py
CMD ["python","capturing.py"]
Note: If your script imports other modules, then you need to modify the COPY statement in your Dockerfile as follows: COPY *.py ./
Hope this will be useful for others.
Another way to run a python script on docker can be:
copy the local python script to docker:
docker cp yourlocalscript.path container_id:/dst_path/
container id can be found using:
docker ps
run the python script on docker:
docker exec -it container_id python /container_script_path.py
It's very simple.
1 - Go to your Python script's directory and create a file with this title, without any extension:
Dockerfile
2 - Now open the Dockerfile and write your script name instead of sci.py.
(content of Dockerfile)
# I chose the slim version; you can choose another tag, for example python:3
FROM python:slim
WORKDIR /usr/local/bin
# replace sci.py with your script name
COPY sci.py .
CMD [ "python", "sci.py" ]
Save it. Now you should create an image from this Dockerfile and the .py script, and then run it.
3 - In the folder's address bar, type cmd and press the Enter key.
4 - When the cmd window opens, type into it:
docker build -t my-python-app .    # this creates an image in docker named my-python-app
5 - And finally, run the image:
docker run -it --rm --name my-running-app my-python-app
I encountered this problem recently; this dependency hell between python2 and python3 got me. Here is the solution.
Bind your current working directory to a Docker container with python2 and pip2 running.
Pull the docker image.
docker pull frolvlad/alpine-python2
Add this alias into /home/user/.zshrc or /home/user/.bashrc
alias python2='docker run -it --rm --name python2 -v "$PWD":"$PWD" -w "$PWD" frolvlad/alpine-python2'
Once you type python2 into your terminal, you'll be dropped into the Docker instance.
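You can also run a local script in one shot by appending the interpreter and file name to the alias (assuming the image exposes the python2 interpreter as python; the script name is a placeholder):
python2 python my_script.py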