I am using the Docker SDK for Python. How do I pass a file to a container using the exec_run function?
I want to replicate the following docker exec command:
docker exec -i -u postgres <insert the id found above> pg_restore -C -d postgres < filename
The above command loads a postgres backup. filename is the name of the file, and it resides on the host machine from which the exec command is being run.
I am trying this:
exec_log = containers[0].exec_run("/bin/bash -c 'pg_restore -C -d postgres <'" + filename, stdout=True, stderr=True, user='postgres')
print(exec_log[1])
Here the file resides inside another Docker container, in which a Python application that uses the Docker client is running.
I am getting this:
b'/bin/bash: 2019-04-29-postgres_db.dump: No such file or directory\n'
I have looked into put_archive, but that would require extracting the file inside the container. Is there a way to do this using exec_run, or some other simpler way?
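For reference, the put_archive route would look roughly like this (a sketch; the container id is a placeholder, and the tar wrapping is the part I would like to avoid):

import io
import tarfile

import docker

client = docker.from_env()
container = client.containers.get("CONTAINER_ID")  # placeholder id

# put_archive only accepts tar data, so the dump must be wrapped first;
# filename is the same variable as above, e.g. "2019-04-29-postgres_db.dump"
stream = io.BytesIO()
with tarfile.open(fileobj=stream, mode="w") as tar:
    tar.add(filename, arcname=filename)
container.put_archive("/tmp", stream.getvalue())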
Thanks
As a workaround, you can mount a volume into your container that contains the file. Then you can use it from there.
container = context.client.containers.run(
    image="ipostgres",
    auto_remove=True,
    detach=True,
    volumes={"/host/machine/store": {"bind": "/opt/whatever", "mode": "ro"}},
)
Then
exec_run("/bin/bash -c 'pg_restore -C -d postgres < /opt/whatever/filename'", stdout=True, stderr=True, user='postgres')
Related
I'm new to Docker.
I am starting the run command with a script called r, which has the following code:
proxy="--build-arg http_proxy=http://wwwcache.open.ac.uk:80 --build-arg https_proxy=http://wwwcache.open.ac.uk:80"
if [ "$http_proxy" == "" ]; then
proxy=
fi
docker build $proxy -t bi-tbcnn docker
docker run -v $(pwd):/e -w /e --entrypoint bash --rm -it bi-tbcnn -c ./run
When I execute r, I get the following error:
bash: ./run: No such file or directory
but when I execute the ./run command directly in my terminal, it works fine.
I use Docker Toolbox on Windows.
The project address is https://github.com/bdqnghi/bi-tbcnn
thanks
This is a known issue with Docker for Windows:
https://blogs.msdn.microsoft.com/stevelasker/2016/09/22/running-scripts-in-a-docker-container-from-windows-cr-or-crlf/
It seems you're facing an issue with carriage return (CR) and line feed (LF) characters; maybe your code editor is changing the newline format automatically.
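If CRLF line endings are indeed the culprit, one quick way to normalize the script before building is a small Python snippet like this (a sketch; run it from the project root, where the run script lives):

# rewrite ./run with Unix (LF) line endings
with open("run", "rb") as f:
    data = f.read()
with open("run", "wb") as f:
    f.write(data.replace(b"\r\n", b"\n"))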
Can you try opening a bash session in the container and executing the script manually?
docker run -v $(pwd):/e -w /e --entrypoint bash --rm -it bi-tbcnn
root@a83fcd779f8e:/e# ./run
Please paste the output here
Suppose I have a Python file, example.py:
import os
containerId = "XXX"
command = "docker exec -ti " + containerId + "sh"
os.system(command)
When I execute this file using "python example.py", I can enter the docker container, but I want to execute some other commands inside it.
I tried this:
import os
containerId = "XXX"
command = "docker exec -ti " + containerId + "sh"
os.system(command)
os.system("ps")
but ps is only executed on the host after I exit the docker container; it is not executed inside the container.
So my question is: how can I execute commands inside a docker container from a Python script?
By the way, I am using Python 2.7. Thanks a lot.
If the commands you would like to execute can be defined in advance easily, then you can attach them to a docker run command like this:
docker run --rm ubuntu:18.04 /bin/sh -c "ps"
Now, if you already have a running container, e.g.
docker run -it --rm ubuntu:18.04 /bin/bash
Then you can do the same thing with docker exec:
docker exec ${CONTAINER_ID} /bin/sh -c "ps"
Now, in python this would probably look something like this:
import os
containerId = "XXX"
in_docker_command = "ps"
command = 'docker exec ' + containerId + ' /bin/sh -c "' + in_docker_command + '"'
os.system(command)
This solution is useful if you do not want to install an external dependency such as docker-py, as suggested by @Szczad.
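As a related note, subprocess.check_output (available in Python 2.7 as well) avoids the shell-quoting pitfalls of building the command by string concatenation; a sketch:

import subprocess

containerId = "XXX"
in_docker_command = "ps"

# passing the arguments as a list bypasses the host shell entirely
output = subprocess.check_output(
    ["docker", "exec", containerId, "/bin/sh", "-c", in_docker_command]
)
print(output)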
I've been searching a lot for the past few days regarding Dockerfiles. I'm using cx_Oracle with Python 2.7. Here's what my Dockerfile looks like:
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
COPY generate_distance.py /code/app/app.py
COPY generate_values.py /code/app/app2.py
To make it easier to explain, I've made a method to print out the name of the file. In generate_distance.py:
def test():
    print "Generate distance"

test()
In generate_values.py:
def test():
    print "Generate values"

test()
Then I'm running docker build with a tag:
docker build -t gen .
Sending build context to Docker daemon 13.82kB
Step 1/4 : FROM sbanal/python-oracle-xe12.1-latest
---> 723335924016
Step 2/4 : WORKDIR /code/app
---> Using cache
---> 9fde6fb3ac02
Step 3/4 : COPY generate_distance.py /code/app/app.py
---> 1dbf7ef85ee3
Removing intermediate container ae626dcef48c
Step 4/4 : COPY generate_values.py /code/app/app2.py
---> 7a54500b88a3
Removing intermediate container f496edfc237d
Successfully built 7a54500b88a3
Successfully tagged gen:latest
When running 'docker images', I can see the 'gen' image. But when I run the 'gen' image, only app.py is working:
>docker run -p 5500:5000 gen
>Generate distance
I can't see what mistake I've made. I also don't know why the file has to be called app.py. If I use a different file name in the COPY step of the Dockerfile, I get a 'No such file or directory' error. That is:
FROM sbanal/python-oracle-xe12.1-latest
WORKDIR /code/app
COPY generate_relation_distance.py /code/app/generate_relation_distance.py
COPY generate_ten_values.py /code/app/generate_ten_values.py
Building and running as in the part above:
docker run -p 5500:5000 gen
python: can't open file 'app.py': [Errno 2] No such file or directory
Hope someone can help me :)
Your image is based on sbanal/python-oracle-xe12.1-latest (first line of your Dockerfile).
That Dockerfile defines a CMD instruction, which specifies the default command of your container. Here, that is
CMD python app.py
(see the last line of your base image's Dockerfile).
The command will be executed as sh -c "python app.py".
This is why your container runs python app.py when it starts.
You need to override the "CMD" part in your Dockerfile, e.g.
CMD ["python", "app2.py"]
See the official docker docs to understand CMD.
You should have only one CMD in your Dockerfile; it contains the command that is executed automatically when the container starts.
If you want to start multiple services, first consider whether they should really be packed into one image. If so, follow the official docs and consider using a supervisor or a script that starts your desired services, as sketched below.
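For instance, a minimal launcher script (a hypothetical start.py, wired up with CMD ["python", "start.py"]) could run both scripts one after the other:

# start.py - runs both scripts sequentially; for long-running services,
# a supervisor is the more robust choice
import subprocess

subprocess.check_call(["python", "app.py"])
subprocess.check_call(["python", "app2.py"])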
If anyone is wondering how I solved the problem: I used a bash script instead.
script.sh:
# Build the image
docker build -t image_name .
# Run the image image_name in a container
docker run -d -p 5500:5000 image_name tail -f /dev/null
# Get the container id
container_id="$(docker ps | grep image_name | grep -Eo '^[^ ]+')"
# Run your programs in the container
docker exec -it "$container_id" /bin/sh -c "python generate_distance.py"
docker exec -it "$container_id" /bin/sh -c "python generate_values.py"
My goal was to store the output in text files and copy those from the Docker container to the local host; the copy step is sketched below.
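The final copy step can be driven from Python with docker cp (a sketch; the container id and the output file path are placeholders):

import subprocess

container_id = "XXX"  # the id found via docker ps
# copy a (hypothetical) result file from the container to the host
subprocess.check_call(
    ["docker", "cp", container_id + ":/code/app/output.txt", "."]
)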
Let me clarify what I want to do.
I have a Python script on my local machine that performs a lot of stuff, and at a certain point it has to call another Python script that must be executed inside a Docker container. That script takes some input arguments and returns some results.
So I want to figure out how to do that.
Example:
def function():
    # do stuff
    ...
    # do more stuff
    # call another local script that must be executed inside a docker container
    result = execute_python_script_into_a_docker(script_arguments)
The container has been launched in a terminal as:
docker run -it -p 8888:8888 my_docker
You can add your file inside the Docker container with the -v option.
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
And execute your Python file inside the container with:
python /myFile.py
or from the host:
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker python /myFile.py
And even if your container is already running:
docker exec -ti docker_name python /myFile.py
docker_name is available after a docker ps command.
Or you can specify the name in the run command, like:
docker run -it --name docker_name -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
It's like:
-v absoluteHostPath:absoluteRemotePath
You can mount a folder too, in the same way:
-v myFolder:/customPath/myFolder
More details in the Docker documentation.
You can use Docker's Python SDK library. First you need to move your script into the container; I recommend doing that when you create the container, or when you start it, as Callmemath mentioned:
docker run -it -v $(pwd)/myFile.py:/myFile.py -p 8888:8888 my_docker
Then to run the script using the library:
...
client = docker.client.from_env()
container = client.containers.get(CONTAINER_ID)
exit_code, output = container.exec_run("python your_script.py script_args")
...
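If you also want to create the container itself from Python, a rough SDK equivalent of the docker run above could look like this (a sketch; bind-mount paths must be absolute, so /abs/path/myFile.py is a placeholder):

import docker

client = docker.from_env()
container = client.containers.run(
    "my_docker",
    detach=True,
    tty=True,
    ports={"8888/tcp": 8888},
    volumes={"/abs/path/myFile.py": {"bind": "/myFile.py", "mode": "ro"}},
)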
You have to use docker exec -it container_name python /filename
Note: to use docker exec, the container must already be running (started with docker run).
I have logged into the container with the command below. Now, from the Python script, I want to copy a file from the container to the host system. How do I do this?
sudo docker run -ti video:new /bin/bash
import os
os.system('cp /tmp/a.txt HOST:/tmp/a.txt')
Map a volume to share data between the container and your host.
docker run -v /tmp/:/tmp/ -ti video:new /bin/bash
Then let your Python script copy the file into the /tmp directory inside the container.
import os
os.system('cp /path/to/a.txt /tmp/a.txt')
Through the -v mapping, the file is placed on the Docker host in the /tmp directory. Once you close the container, the file will still exist on the host as /tmp/a.txt.
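Within the script, shutil.copy is a cleaner alternative to shelling out with os.system (a sketch; the source path is a placeholder):

import shutil

# /tmp is the bind-mounted volume, so the copy also appears on the host
shutil.copy("/path/to/a.txt", "/tmp/a.txt")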
The container can't copy information outside of its isolation. If you want to share information between the container and the host, use volume mapping to do that (-v):
https://docs.docker.com/userguide/dockervolumes/