I have a Python method that uses Docker, and I'm trying to understand it. Here is the method:
def exec(self, container_target, command, additional_options=""):
    """Execute a docker exec command and return the stdout, or None on error."""
    cmd = """docker exec -i "%s" sh -c '%s' %s""" % (
        container_target, command, additional_options)
    if self.verbose:
        print(cmd)
    try:
        cp = subprocess.run(cmd,
                            shell=True,
                            check=True,
                            stdout=subprocess.PIPE)
        return cp.stdout.decode("utf-8").strip()
    except Exception as e:
        print(f"Docker exec failed: {e}")
        return None
While debugging, I can see that the cmd value is:
'docker exec -i "craft_p2-2" sh -c \'cd craft && composer show
--name-only | grep nerds-and-company/schematic | wc -l\' '
My understanding is that the code uses the shell of the container named craft_p2-2 and enters a folder named craft. Then it checks whether the Schematic plugin is installed. Is that correct?
This might be obvious to some, but I don't have a wealth of container knowledge and need to be sure of what's going on.
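As a concrete illustration, the sh -c pipeline changes into the craft directory, lists the installed Composer packages by name, filters for nerds-and-company/schematic, and counts the matching lines, so the method returns "1" (or more) when the plugin is installed and "0" when it is not. A usage sketch under that reading (client here is a hypothetical instance of the class that defines exec):

# "client" is a hypothetical instance of the class defining exec().
# wc -l prints the number of matching package names, so the returned
# string is "0" when the plugin is absent and "1" when it is installed.
count = client.exec(
    "craft_p2-2",
    "cd craft && composer show --name-only | grep nerds-and-company/schematic | wc -l")
if count is not None and int(count) > 0:
    print("Schematic plugin is installed")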
I created a custom command in Django and I want to run a docker-compose command from it. I use a subprocess as follows:
class Command(BaseCommand):
    def handle(self, *args, **options):
        data = open_file()
        os.environ['DATA'] = data
        command_name = ["docker-compose ", "-f", "docker-compose.admin.yml", "up"]
        popen = subprocess.Popen(command_name, stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                 universal_newlines=True)
        return popen
When I do, I get a FileNotFoundError:
FileNotFoundError: [Errno 2] No such file or directory: 'docker-compose ': 'docker-compose '
Is it even possible to use docker-compose inside of a command?
It feels like I am missing something.
Thank you!
I see two possible issues.
Your docker compose should run in the background, so you should add the -d option at the end of the command: docker-compose -f docker-compose.admin.yml up -d
Best practice is to start docker compose in the background; you can then capture its output with Popen by executing docker-compose -f docker-compose.admin.yml logs.
You can also run docker-compose services and get interactive output by defining stdin_open: true in your yml file.
You could also check that your current directory is the one where docker-compose.admin.yml exists, by printing os.getcwd() and comparing it to the docker-compose.admin.yml path.
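Beyond that, the traceback itself points at the immediate cause: the first list element is 'docker-compose ' with a trailing space, so Popen looks for an executable literally named "docker-compose " (with the space) and fails. A minimal sketch of the corrected call, keeping the structure from the question:

import subprocess

# No trailing space in "docker-compose": each list element must be exactly one argument.
command_name = ["docker-compose", "-f", "docker-compose.admin.yml", "up", "-d"]
popen = subprocess.Popen(command_name, stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                         universal_newlines=True)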
I would like to run the following command using python subprocess.
docker run --rm -it -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep 'cex4queue": []'
If I run it using subprocess.call(), it works, but I am not able to check the return value:
s1="docker run --rm -it -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep \'cex4queue\": []\'"
p1 = subprocess.call(s1,shell=True)
The same command with subprocess.run is not working.
I want to check whether that string is present or not. How can I check?
I would recommend the use of subprocess.Popen:
import subprocess as sb

# shell=True is needed here: the command uses $(pwd) and a | pipeline,
# neither of which works when the string is split into an argument list.
# (-it is dropped: no TTY is available when stdout is captured.)
cmd = "docker run --rm -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep 'cex4queue\": []'"
process = sb.Popen(cmd, shell=True, stdout=sb.PIPE, stderr=sb.PIPE)
output, errors = process.communicate()
print('The output is: {}\n\nThe errors were: {}'.format(output, errors))
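To answer "is the string present?" directly: grep exits with status 0 on a match and non-zero otherwise, so the return code alone is enough. A sketch with subprocess.run, assuming the same image and pipeline as above:

import subprocess

# grep returns 0 when 'cex4queue": []' appears in the output, non-zero otherwise.
cmd = "docker run --rm -v $(pwd):/grep11-cli/config ibmzcontainers/hpvs-cli-installer:1.2.0.1.s390x crypto list | grep 'cex4queue\": []'"
result = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print("String present:", result.returncode == 0)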
Hope you can help. I need, in my Python script, to run the Docker software container with a specific image (FEniCS in my case) and then pass it a command to execute a script.
I've tried with subprocess:
import shlex
import subprocess

cmd1 = 'docker exec -ti -u fenics name_of_my_container /bin/bash -l'
cmd2 = 'python2 shared/script_to_be_executed.py'
process = subprocess.Popen(shlex.split(cmd1),
                           stdout=subprocess.PIPE, stdin=subprocess.PIPE,
                           stderr=subprocess.PIPE)
process.stdin.write(cmd2)
print(process.stdout.read())
But it doesn't do anything. Suggestions?
Drop the -it flags in your call to docker; you don't want them. Also, don't try to send the command to execute into the container via stdin; just pass the command to run in your call to docker exec.
I don't have a container running, so I'll use docker run instead, but the code below should give you a clue:
import subprocess
cmd = 'docker run python:3.6.4-jessie python -c print("hello")'.split()
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)
This will run python -c print("hello") in the container and capture the output, so the Python (3.6) script will itself print
b'hello\n'
It will also work in Python 2.7; I don't know which version you're using on the host machine :)
Regarding communicating with a subprocess, see the official docs for subprocess.Popen.communicate. Since Python 3.5 there's also subprocess.run, which makes your life even easier.
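For instance, a minimal subprocess.run sketch of the same call (same image as above):

import subprocess

# subprocess.run (Python 3.5+) waits for the process and captures stdout in one call.
cmd = 'docker run python:3.6.4-jessie python -c print("hello")'.split()
cp = subprocess.run(cmd, stdout=subprocess.PIPE)
print(cp.stdout)  # b'hello\n'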
HTH!
You can also use subprocess to call FEniCS as an application; see section 4.4 here.
docker run --rm -v $(pwd):/home/fenics/shared -w /home/fenics/shared quay.io/fenicsproject/stable "python3 my-code.py"
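A sketch of invoking that command from Python, assuming shell=True so that $(pwd) is expanded by the shell (image and paths taken verbatim from the command above):

import subprocess

# The shell expands $(pwd); the FEniCS image runs the quoted script command.
cmd = ('docker run --rm -v $(pwd):/home/fenics/shared -w /home/fenics/shared '
       'quay.io/fenicsproject/stable "python3 my-code.py"')
completed = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE)
print(completed.stdout.decode())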
I'm trying to run the following:
def conn(ad_group):
    result = Popen(["sudo -S /opt/quest/bin/vastool", "-u host/ attrs 'AD_GROUP_NAME' | grep member"], stdout=PIPE)
    return result.stdout
on a RedHat machine in a Python script, but I'm getting FileNotFoundError: [Errno 2] No such file or directory: 'sudo -S /opt/quest/bin/vastool'
I can run the command (sudo -S /opt/quest/bin/vastool -u host/ attrs 'AD_GROUP_NAME' | grep member) at the command line without a problem.
I'm sure I've messed up something in the function, but I need another set of eyes.
Thank you
You need to make the entire command a single string, and use the shell=True option because you're using a shell pipeline.
result = Popen("sudo -S /opt/quest/bin/vastool -u host/ attrs 'AD_GROUP_NAME' | grep member", stdout=PIPE, shell=True)
I can't seem to get Fabric to play nicely with backgrounding a process that I've used nohup on... It should be possible, given various pieces of information, including here and here.
def test():
    h = 'xxxxx.compute-1.amazonaws.com'
    ports = [16646, 9090, 6666]
    with settings(host_string=h):
        tun_s = "ssh -o StrictHostKeyChecking=no -i ~/.ssh/kp.pem %s@%s" % (env.user, h)
        for port in ports:
            p_forward = "-L %d:localhost:%d" % (port, port)
            tun_s = "%s %s" % (tun_s, p_forward)
        tun_s = "%s -N" % tun_s
        # create the tunnel...
        print "creating tunnel %s" % tun_s
        run("nohup '%s' >& /dev/null < /dev/null &" % tun_s)
    print "fin"
Abbreviated output:
ubuntu@domU-xxx:~/deploy$ fab test
executing on tunnel ssh -o StrictHostKeyChecking=no -i ~/.ssh/kp.pem ubuntu@xxx -L 16646:localhost:16646 -L 9090:localhost:9090 -L 6666:localhost:6666 -N
[xxx.compute-1.amazonaws.com] run: nohup 'ssh -o StrictHostKeyChecking=no -i ~/.ssh/kp.pem ubuntu@xxx.compute-1.amazonaws.com -L 16646:localhost:16646 -L 9090:localhost:9090 -L 6666:localhost:6666 -N' >& /dev/null < /dev/null &
fin
Done.
Disconnecting from xxxx
I know there is no problem with the tunnel command per se, because if I strip away the nohup stuff it works fine (but then, obviously, Fabric hangs). I'm pretty sure it's not being properly detached, and the tunnel process dies as soon as the run function returns.
But why?
This also happens with a python command in another part of my code.
So, after much wrangling, it seems this is not possible for whatever reason with my setup (default Ubuntu installs on EC2 instances). I have no idea why, as it does seem possible according to various sources.
I fixed my particular problem by using Paramiko in place of Fabric, for calls that need to be left running in the background. The following achieves this:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
privkey = paramiko.RSAKey.from_private_key_file('xxx.pem')
ssh.connect('xxx.compute-1.amazonaws.com', username='ubuntu', pkey=privkey)
# -f backgrounds ssh itself; nohup keeps the tunnel alive after the session closes
stdin, stdout, stderr = ssh.exec_command("nohup ssh -f -o StrictHostKeyChecking=no -i ~/.ssh/xxx.pem ubuntu@xxx.compute-1.amazonaws.com -L 16646:localhost:16646 -L 9090:localhost:9090 -L 6666:localhost:6666 -N >& /dev/null < /dev/null &")
ssh.close()