I can't seem to figure out how to enable async I/O with a container shell session using the docker-py SDK. What I'm essentially trying to achieve is a working equivalent of docker exec -it $container_id bash in docker-py.
Obviously, stdout poses no problems. It's just that there is no (glaringly obvious) way to actually write to stdin to interact with the running container's shell. Is that really so?
cmd = "bash"
cli = docker.DockerClient()
cli.containers.get(container_id)
socket = cli.exec_run(cmd, stdin=True, socket=True)
socket.writable() # => False
I also tried running '/bin/bash -c "export TERM=xterm; exec bash"' as the cmd and adding the tty flag to exec_run. Needless to say, to no avail.
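The furthest I've gotten is with the low-level APIClient, which does hand back a raw socket; a sketch of that route (reaching through the private _sock attribute is an undocumented implementation detail and may vary across docker-py versions):
import docker

# Low-level client; exec_create/exec_start expose the exec instance directly
api = docker.APIClient()
exec_id = api.exec_create(container_id, 'bash', stdin=True, tty=True)['Id']
sock = api.exec_start(exec_id, socket=True, tty=True)

sock._sock.send(b'echo hello\n')  # write to the shell's stdin
print(sock._sock.recv(4096))      # read whatever the shell printed
sock.close()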
Am I doing something wrong?
Related
When I run docker exec -it docker-name bash on a CentOS 7 server, it puts me inside the container, where I can run python xx.py config.yaml to do some work.
But if I run docker exec -it docker-name bash from a Jenkins shell step, there is no response. When I put python xx.py config.yaml on the line after it, Jenkins shows [ python: can't open file 'xxx.py': [Errno 2] No such file or directory ]. I think this means it never entered the docker container, so it can't find the Python file that lives inside the container. How can I enter the docker container from a Jenkins shell step?
When you run docker exec -it docker-name bash, you get an interactive shell inside the container that is connected to your console, and the next command you type into the console is executed in that shell.
But Jenkins has no console. It is executing a script, with standard input connected to the null device (which always returns end of file on read). So in effect it is executing the equivalent of
docker exec -it docker-name bash </dev/null (the /dev/null is the null device and < connects it to the standard input of the command). And if you do that on your console, nothing happens and you get your original prompt back.
But you don't have to, and shouldn't, run bash here at all. You give docker exec the command you want to run in the container and it runs it there. So you just do
docker exec -i docker-name python xx.py config.yaml
and that runs the python command, prints any output, and disconnects from the container again when the command ends.
I've omitted the -t because it instructs docker to allocate a terminal (console), and Jenkins has none. Just the -i, which keeps standard input connected, is good enough.
Now there is also a way to send commands to the standard input of that bash, similar to what the console would do, but I strongly recommend reading the bash documentation before attempting that.
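For illustration, a sketch of that stdin approach driven from a script rather than a console; the container name and command are the question's own:
import subprocess

# -i keeps stdin open so bash reads the commands we pipe in;
# there is deliberately no -t, since there is no terminal here
p = subprocess.Popen(['docker', 'exec', '-i', 'docker-name', 'bash'],
                     stdin=subprocess.PIPE)
p.communicate(b'python xx.py config.yaml\n')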
Overview
I'm trying to use Python Fabric to run an ssh command as root on a remote server.
The command: nohup ./foo &
foo is expected to run for several days. I must be able to disassociate foo from Fabric's remote ssh session and put foo in the background.
The Fabric FAQ says you should use something like screen or tmux when you run your fabric script (which runs the backgrounded command). I tried that, but my fabric script still hung. foo is not hanging.
Question
How do I use fabric to run this command on a remote server without the script hanging: nohup ./foo &
Details
This is my script:
#!/bin/sh
# Credit: https://unix.stackexchange.com/a/20895/6766
if "true" : '''\'
then
    exec "/nfs/it/network_python/$OSREL/bin/python" "$0" "$@"
    exit 127
fi
'''
from getpass import getpass
import os
from fabric import Connection, Config
assert os.geteuid()==0, "ERROR: Must run as root"
for host in ['host1.foo.local', 'host2.foo.local']:
    # Make an ssh connection to the host...
    conn = Connection(host)
    # The script always hangs at this line
    result = conn.run('nohup ./foo &', warn=True, hide=True)
I always open a tmux session to run the aforementioned script in; even doing so, the script hangs when I get to conn.run(), above.
I'm running the script on a vanilla CentOS 6.5 VM; it runs under python 2.7.10 and fabric 2.1.
The Fabric FAQ is unclear... I thought the FAQ wanted tmux used on the local side when I executed the Fabric script.
The correct way to fix this problem is to replace nohup in the remote command with screen -d -m <command>. Now I can run the whole script locally with no hangs (and I don't have to use tmux in the local terminal).
Explicitly, I have to rewrite the last line of my script in my question as:
# Remove &, and nohup...
result = conn.run('screen -d -m ./foo', warn=True, hide=True)
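Putting it together, the loop from the question becomes (hosts and binary are the question's own examples):
for host in ['host1.foo.local', 'host2.foo.local']:
    conn = Connection(host)
    # screen detaches immediately, so run() returns instead of hanging
    result = conn.run('screen -d -m ./foo', warn=True, hide=True)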
Hope you can help. In my Python script I need to run the software container Docker with a specific image (FEniCS in my case) and then pass it a command to execute a script.
I've tried with subprocess:
import shlex
import subprocess

cmd1 = 'docker exec -ti -u fenics name_of_my_container /bin/bash -l'
cmd2 = 'python2 shared/script_to_be_executed.py'
process = subprocess.Popen(shlex.split(cmd1),
                           stdout=subprocess.PIPE, stdin=subprocess.PIPE,
                           stderr=subprocess.PIPE)
process.stdin.write(cmd2)
print(process.stdout.read())
But it doesn't do anything. Suggestions?
Drop the -it flags in your call to docker; you don't want them. Also, don't try to send the command to execute into the container via stdin: just pass the command to run in your call to docker exec.
I don't have a container running, so I'll use docker run instead, but the code below should give you a clue:
import subprocess

# Split the command line into an argv list; the quoted print("hello")
# survives the split because it contains no spaces
cmd = 'docker run python:3.6.4-jessie python -c print("hello")'.split()
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()  # err is None since stderr isn't piped
print(out)
This will run python -c print("hello") in the container and capture the output, so the Python (3.6) script will itself print
b'hello\n'
It will also work in Python 2.7, I don't know which version you're using on the host machine :)
Regarding communicating with a subprocess, see the official docs subprocess.Popen.communicate. Since Python 3.5 there's also subprocess.run, which makes your life even easier.
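For instance, a minimal sketch of the same call with subprocess.run (Python 3.5+); the image and command are just the ones from the example above:
import subprocess

result = subprocess.run(
    ['docker', 'run', 'python:3.6.4-jessie', 'python', '-c', 'print("hello")'],
    stdout=subprocess.PIPE,
)
print(result.stdout)  # b'hello\n'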
HTH!
You can use subprocess to call FEniCS as an application; see section 4.4 of the FEniCS Docker documentation.
docker run --rm -v $(pwd):/home/fenics/shared -w /home/fenics/shared quay.io/fenicsproject/stable "python3 my-code.py"
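A sketch of that same command issued from Python; everything mirrors the shell line above, with os.getcwd() standing in for $(pwd), and the command kept as a single string because that is how the shell line quotes it:
import os
import subprocess

subprocess.call([
    'docker', 'run', '--rm',
    '-v', os.getcwd() + ':/home/fenics/shared',
    '-w', '/home/fenics/shared',
    'quay.io/fenicsproject/stable',
    'python3 my-code.py',
])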
I'm using Python Fabric to deploy binaries to an EC2 server and am attempting to run them in the background (a subshell).
All the Fabric commands for performing local actions, putting files, and executing remote commands without elevated privileges work fine. The issue arises when I attempt to run the binary.
with cd("deploy"):
run('mkdir log')
sudo('iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080', user="root")
result = sudo('./dbserver &', user="root") # <---- This line
print result
if result.failed:
print "Running dbserver failed"
else:
print "DBServer now running server" # this gets printed despite the binary not running
After I log in to the server, ps aux | grep dbserver shows nothing. How can I get Fabric to execute the binary? The same command, ./dbserver &, executed from the shell does exactly what I want. Thanks.
This is likely related to TTY issues, and/or to the fact that you're attempting to background a process.
Both of these are discussed in the FAQ under these two headings:
http://www.fabfile.org/faq.html#init-scripts-don-t-work
http://www.fabfile.org/faq.html#why-can-t-i-run-programs-in-the-background-with-it-makes-fabric-hang
Try making the sudo call like this:
sudo('nohup ./dbserver &', user="root", pty=False)
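If it still hangs, the FAQ's fuller recipe also redirects the streams so the remote sshd isn't left waiting on them; a sketch, where dbserver.log is just an illustrative path:
result = sudo('nohup ./dbserver > dbserver.log 2>&1 < /dev/null &',
              user="root", pty=False)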
I use the Python module pysftp to connect to a remote server. Below you can see the Python code:
import pysftp
import sys
import sqr_common

srv = pysftp.Connection(host="xxxxxx", username="xxxx",
                        password="xxxxx")
command = "/usr/bin/bash"
command2 = "APSHOME=/all/aps/msc_2012; export APSHOME; "
srv.execute(command)
srv.execute(command2)
srv.close()
The problem is that the command /usr/bin/bash is an infinite process, so my script never gets past it. Can anyone tell me how to choose a shell on the remote server (for example bash) and execute my command in it? Is there any pysftp function that allows choosing the shell?
Try this:
/usr/bin/bash -c "APSHOME=/all/aps/msc_2012; export APSHOME; "
This problem is not specific to Python; it's more a question of how to execute commands under a specific shell.
If you only need to run a single command, you can use bash's -c switch:
bash -c "echo 123"
You can run multiple commands, separated by semicolons:
bash -c "echo 123 ; echo 246"
If you need to run many commands under a specific shell, create a shell script file (a .bash file) on the remote machine and execute it:
bash myscript.bash
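Back in pysftp terms, the same idea looks something like this; a sketch reusing the connection from the question, where run_my_job.bash is a hypothetical script name:
# Everything runs inside one bash -c invocation, so the exported
# APSHOME is visible to the script; run_my_job.bash is hypothetical
srv.execute('/usr/bin/bash -c "APSHOME=/all/aps/msc_2012; export APSHOME; bash run_my_job.bash"')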