Hope you can help. In my Python script I need to run a Docker container with a specific image (FEniCS in my case) and then pass it a command to execute a script.
I've tried with subprocess:
cmd1 = 'docker exec -ti -u fenics name_of_my_container /bin/bash -l'
cmd2 = 'python2 shared/script_to_be_executed.py'
process = subprocess.Popen(shlex.split(cmd1),
                           stdout=subprocess.PIPE,
                           stdin=subprocess.PIPE,
                           stderr=subprocess.PIPE)
process.stdin.write(cmd2)
print(process.stdout.read())
But it doesn't do anything. Suggestions?
Drop the -it flags in your call to docker; you don't want them. Also, don't try to send the command to execute into the container via stdin; just pass the command to run in your call to docker exec.
I don't have a container running, so I'll use docker run instead, but the code below should give you a clue:
import subprocess
cmd = 'docker run python:3.6.4-jessie python -c print("hello")'.split()
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)
This will run python -c print("hello") in the container and capture the output, so the (Python 3.6) host script above will itself print
b'hello\n'
It will also work in Python 2.7; I don't know which version you're using on the host machine :)
Regarding communicating with a subprocess, see the official docs subprocess.Popen.communicate. Since Python 3.5 there's also subprocess.run, which makes your life even easier.
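For example, the docker exec from your question could be run with subprocess.run like this (a minimal sketch, reusing the container name and script path from your post and dropping -ti):
import subprocess

# Run the script inside the already-running container; the command is passed
# directly to docker exec instead of being written to a shell's stdin.
result = subprocess.run(
    ['docker', 'exec', '-u', 'fenics', 'name_of_my_container',
     'python2', 'shared/script_to_be_executed.py'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout.decode())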
HTH!
You can use subprocess to call FEniCS as an application; see section 4.4 here.
docker run --rm -v $(pwd):/home/fenics/shared -w /home/fenics/shared quay.io/fenicsproject/stable "python3 my-code.py"
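Driving that same command from a Python script could look roughly like this (a sketch; it assumes, as the shell version does, that my-code.py sits in the current working directory):
import os
import subprocess

# Python equivalent of the one-liner above: mount the current directory into
# the container and run the script there.
cmd = ['docker', 'run', '--rm',
       '-v', f'{os.getcwd()}:/home/fenics/shared',
       '-w', '/home/fenics/shared',
       'quay.io/fenicsproject/stable',
       'python3 my-code.py']
subprocess.run(cmd, check=True)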
I'm sure this is something simple, but I'm trying several settings and I just can't seem to get this to work.
I have the following code:
import subprocess
p = subprocess.Popen('mkdir -p /backups/my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
This is running in a Flask application on nginx with Python 3.
When this executes I'm getting the following error:
/bin/sh: 1: mkdir: not found
I've tried with shell=False, I've tried with Popen(['mkdir', ...]), and I've tried subprocess.run like this question/answer
If I run with shell=False, I get the following error:
Error: [Errno 2] No such file or directory: 'mkdir -p /backups/my_folder': 'mkdir -p /backups/my_folder'
When I use /bin/mkdir, it works. But there are other commands that invoke sub-commands which fail (tar calling gzip, for instance).
What am I missing to get this to work?
Running:
Debian 9.8, Nginx 1.14.0, Python 3.6.8
EDIT
I need this to work for other commands as well. I know I can use os.makedirs, but I have several different commands I will be executing (rsync, ssh, tar, and more)
For these simple commands, try to use python instead of invoking the shell - it makes you more independent of the environment:
import os

os.makedirs('/backups/my_folder', exist_ok=True)
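Some of the other commands you mention have shell-free equivalents too; for example, a gzipped tar archive can be built with the standard library (a sketch, assuming the same folder):
import shutil

# Creates /backups/my_folder.tar.gz without invoking the tar or gzip binaries.
shutil.make_archive('/backups/my_folder', 'gztar', '/backups/my_folder')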
I found the problem.
I realized that my /etc/systemd/system/site.service uWSGI settings had a hard coded path:
Environment = /usr/local/bin
Once I changed this to include /bin, all my subprocess commands executed just fine.
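If you would rather not touch the service file, another option (just a sketch, not what I did) is to hand the subprocess an explicit PATH; the directories listed here are an assumption about a standard Debian layout:
import os
import subprocess

# Extend PATH for the child process so /bin and /usr/bin are found even when
# the service environment has been stripped down.
env = dict(os.environ)
env['PATH'] = '/usr/local/bin:/usr/bin:/bin:' + env.get('PATH', '')
p = subprocess.Popen('mkdir -p /backups/my_folder', shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=env)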
import subprocess
p = subprocess.Popen('mkdir -p my_folder', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
(result, error) = p.communicate()
print(result)
This is for Windows 10 only.
I am using Paramiko to test docker commands from an external system (I need to do this; I can't just build the container and test it locally). The test case I am trying to run involves starting up Apache Spark and running one of the examples, specifically SparkPi. For some reason my Python script hangs on the docker exec ... command below. I have previously run other docker execs without a problem, and everything works when run manually. It only breaks when I put everything in the script.
Command:
stdin, stdout, stderr = ssh_client.exec_command(f'docker exec {spark_container_id} bash -c \'"$SPARK_HOME"/bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master spark://$(hostname):7077 "$SPARK_HOME"/examples/jars/spark-examples_2.11-2.1.1.jar {self.slices_to_calculate}\'')
print("\nstdout is:\n" + stdout.read() + "\nstderr is:\n" + stderr.read())
Any idea what could be causing this? And why?
It turned out that the reason for this was that I didn't pass the get_pty=True parameter to exec_command. Attaching a terminal to the spark-submit command must be what makes the output get printed properly. So the solution would be:
stdin, stdout, stderr = ssh_client.exec_command(f'docker exec -t {spark_container_id} bash -c \'"$SPARK_HOME"/bin/spark-submit ...', get_pty=True)
NOTE: By using get_pty=True the stdout and stderr of the exec_command get combined.
Disclaimer: I know there is a package, pypsexec, for this; I'm asking why this happens and how to solve it.
the command
psexec -s -i -d \\<PC-NAME> -u <UserName> -p <Password> <Command>
works perfectly when typed manually in PowerShell. However, when I try to mimic this with Python:
from subprocess import Popen,PIPE
p = Popen("""psexec -s -i -d \\<PC-NAME> -u <UserName> -p <Password>
<Command>""", stdin=PIPE, stdout=PIPE, shell= True )
stdout, stderr = p.communicate()
print(stdout, stderr)
I get the following:
'psexec' is not recognized as an internal or external command,
operable program or batch file.
b'' None
Any idea why? psexec is on the PATH and, as I said, works from cmd/PowerShell. I get the same error for pskill etc.
Solved - read the comments
Move psexec.exe to C:\Windows\SysWOW64. 32-bit Python reads from there.
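If you would rather not move the executable, you can also point Popen at its full path so the PATH lookup never comes into play; a sketch, with an assumed install location:
from subprocess import Popen, PIPE

# Using an absolute path avoids relying on PATH. Adjust it to wherever
# PsExec actually lives on your machine.
p = Popen(r"C:\Tools\PsTools\psexec.exe -s -i -d \\<PC-NAME> -u <UserName> -p <Password> <Command>",
          stdin=PIPE, stdout=PIPE, shell=True)
stdout, stderr = p.communicate()
print(stdout, stderr)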
If you are interested, I made a package for PsExec:
You can perform a lot of fun operations with it.
Project here
Please check it out :)
I'll try to explain this as simply as possible.
I have a dockerised Python app. Within this app, at some point, I try to run a docker command that starts another (LibreOffice) container, as such:
import subprocess
file_path = 'path_to_file'
args = ['docker', 'run', '-it', '-v', '/tmp:/tmp',
'lcrea/libreoffice-headless', '--headless', '--convert-to', 'pdf', file_path,
'--outdir', '/tmp']
process = subprocess.run(args,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
timeout=timeout)
I end my python app's Dockerfile with a command which starts the server:
CMD python3 -m app.run_app
What is interesting is that when I start the Python app like this, it works fine:
docker-compose run -p 9090:9090 backend /bin/bash
root@74430c3f1f0c:/src# python3 -m app.run_app
But when I start it just using docker-compose up, the LibreOffice container is never run. I am sure of it because when I do docker ps -a, in the first case a libreoffice container has been created, while in the second there is none.
What is going on here?
I found the error. I was passing in the -it option, which made the process fail with "the input device is not a TTY". All I had to do was take it out...
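So the call from the question becomes (the same snippet, just without -it; the timeout argument is omitted here for brevity):
import subprocess

file_path = 'path_to_file'
# Same docker run invocation as before, minus the -it flags, since there is
# no terminal attached when the app is started by docker-compose up.
args = ['docker', 'run', '-v', '/tmp:/tmp',
        'lcrea/libreoffice-headless', '--headless', '--convert-to', 'pdf',
        file_path, '--outdir', '/tmp']
process = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)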
I can't seem to figure out how to enable async I/O with a container shell session using the docker-py SDK. What I am essentially trying to achieve is a working equivalent of docker exec -it $container_id bash in docker-py.
Obviously, stdout poses no problems. It's just that there is no (glaringly obvious) way to actually write to stdin to interact with the running container's shell. Is that really so?
cmd = "bash"
cli = docker.DockerClient()
cli.containers.get(container_id)
socket = cli.exec_run(cmd, stdin=True, socket=True)
socket.writable() # => False
I also tried running '/bin/bash -c "export TERM=xterm; exec bash"' as the cmd and adding the tty flag to exec_run. Needless to say, to no avail.
Am I doing something wrong?