Paramiko exec_command hangs on docker exec - python

I am using Paramiko to test docker commands from an external system (I need to do this; I can't just build the container and test it locally). The test case I am trying to run starts up Apache Spark and runs one of the examples, specifically SparkPi. For some reason my Python script hangs on the docker exec ... command below. However, I perform other docker execs earlier in the script without issue, and I have no problem running everything manually. It only breaks when I put everything in the script.
Command:
stdin, stdout, stderr = ssh_client.exec_command(
    f'docker exec {spark_container_id} bash -c \'"$SPARK_HOME"/bin/spark-submit '
    f'--class org.apache.spark.examples.SparkPi '
    f'--master spark://$(hostname):7077 '
    f'"$SPARK_HOME"/examples/jars/spark-examples_2.11-2.1.1.jar {self.slices_to_calculate}\''
)
print("\nstdout is:\n" + stdout.read().decode() + "\nstderr is:\n" + stderr.read().decode())
Any idea what could be causing this? And why?

I found out that the reason for this is that I didn't pass the get_pty=True parameter to exec_command. It must be the case that by attaching a terminal to the spark-submit command the output gets printed properly. So the solution would be:
stdin, stdout, stderr = ssh_client.exec_command(f'docker exec -t {spark_container_id} bash -c \'"$SPARK_HOME"/bin/spark-submit ...', get_pty=True)
NOTE: By using get_pty=True, the stdout and stderr of the exec_command get combined into a single stream.
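For completeness, a minimal sketch of the working pattern, reading the combined output line by line and then checking the exit status. The host name, username, and container id are placeholders, and the spark-submit command is elided exactly as above:
import paramiko

spark_container_id = "spark_master"  # placeholder: use the real container id
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect("docker-host.example.com", username="me")  # placeholder host/user

cmd = (f'docker exec -t {spark_container_id} bash -c '
       f'\'"$SPARK_HOME"/bin/spark-submit ...\'')  # same command as above, elided
stdin, stdout, stderr = ssh_client.exec_command(cmd, get_pty=True)

# With get_pty=True, stderr is merged into stdout, so draining stdout is enough.
for line in stdout:
    print(line, end="")

print("exit status:", stdout.channel.recv_exit_status())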

Related

Execute local script on remote server using non-default shell with Python Paramiko

I am trying to run my local bash script on a remote server without copying it onto the remote server. For test purposes it is as simple as the following. It runs perfectly on more than a few servers, but on some servers running tcsh there is an issue. How do I invoke bash if the following does not work? Below is the dummy test.sh:
#!/bin/bash
a=test
echo $a
echo $SHELL
I am using Python Paramiko exec_command for remote execution as follows:
my_script = open("test.sh").read()
stdin, stdout, stderr = ssh.exec_command(my_script, timeout=15)
print(stdout.read().decode())
err = stderr.read().decode()
if err:
    print(err)
Given that the connection works, and that the same script works on other servers where bash is the default shell, this is the output that I get:
/bin/tcsh
printing from errors
a=test: Command not found.
a: Undefined variable.
The #!/bin/bash is a comment. Sending it to a remote shell as a command has no effect.
You have to execute /bin/bash on the server and send your script to it:
stdin, stdout, stderr = ssh.exec_command("/bin/bash", timeout=15)
stdin.write(my_script)
Also, you have to exit the shell at the end of your script, otherwise it will never end.
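Putting the pieces together, a minimal sketch of that pattern; the host name and credentials are placeholders, and closing the channel's write side is used here as an alternative to appending an explicit exit to the script:
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("remote.example.com", username="me")  # placeholder host/user

with open("test.sh") as f:
    my_script = f.read()

# Start bash explicitly so the login shell (tcsh here) never sees the script.
stdin, stdout, stderr = ssh.exec_command("/bin/bash", timeout=15)

# Feed the script to bash, then close the write side of the channel so bash
# sees end-of-file and terminates instead of waiting for more input.
stdin.write(my_script)
stdin.channel.shutdown_write()

print(stdout.read().decode())
err = stderr.read().decode()
if err:
    print(err)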
Related question:
Pass arguments to a bash script stored locally and needs to be executed on a remote machine using Python Paramiko

psexec command works perfectly from PowerShell but is not recognized by Python

Disclaimer: I know there is a package, pypsexec, for this. I'm asking why this happens and how to solve it.
The command
psexec -s -i -d \\<PC-NAME> -u <UserName> -p <Password> <Command>
works perfectly when typed manually in PowerShell. However, when I try mimicking this with Python:
from subprocess import Popen, PIPE

p = Popen("""psexec -s -i -d \\<PC-NAME> -u <UserName> -p <Password>
<Command>""", stdin=PIPE, stdout=PIPE, shell=True)
stdout, stderr = p.communicate()
print(stdout, stderr)
I get the following:
'psexec' is not recognized as an internal or external command,
operable program or batch file.
b'' None
Any idea why? psexec is on the PATH and, as I said, works from cmd/PowerShell. I get the same error for pskill etc.
Solved (see the comments):
Move psexec.exe to C:\Windows\SysWOW64. A 32-bit Python reads from there (WoW64 redirects System32 lookups to SysWOW64 for 32-bit processes).
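As a quick way to check what a given Python interpreter actually resolves, and to sidestep path issues entirely, you can look the executable up and call it by an explicit path. The location below is only an assumption; point it at wherever psexec.exe really lives:
import shutil
import subprocess

print(shutil.which("psexec"))  # None means this interpreter cannot find psexec

psexec = r"C:\Tools\PsTools\psexec.exe"  # hypothetical location, adjust as needed
p = subprocess.Popen([psexec, "-s", "-i", "-d", r"\\<PC-NAME>",
                      "-u", "<UserName>", "-p", "<Password>", "<Command>"],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, stderr = p.communicate()
print(stdout, stderr)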
If you are interested, I made a package for PsExec:
You can perform a lot of fun operations with it.
Project here
Please check it out :)

Run sequential commands in Python with subprocess

Hope you can help. In my Python script I need to run a Docker container with a specific image (Fenics in my case) and then pass it a command to execute a script.
I've tried with subprocess:
import shlex
import subprocess

cmd1 = 'docker exec -ti -u fenics name_of_my_container /bin/bash -l'
cmd2 = 'python2 shared/script_to_be_executed.py'
process = subprocess.Popen(shlex.split(cmd1), stdout=subprocess.PIPE,
                           stdin=subprocess.PIPE, stderr=subprocess.PIPE)
process.stdin.write(cmd2)
print(process.stdout.read())
But it doesn't do anything. Suggestions?
Drop the -it flags in your call to docker; you don't want them here. Also, don't try to send the command to execute into the container via stdin; just pass the command to run in your call to docker exec.
I don't have a container running, so I'll use docker run instead, but the code below should give you a clue:
import subprocess
cmd = 'docker run python:3.6.4-jessie python -c print("hello")'.split()
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, err = p.communicate()
print(out)
This will run python -c print("hello") in the container and capture the output, so the Python (3.6) script will itself print
b'hello\n'
It will also work in Python 2.7; I don't know which version you're using on the host machine :)
Regarding communicating with a subprocess, see the official docs subprocess.Popen.communicate. Since Python 3.5 there's also subprocess.run, which makes your life even easier.
HTH!
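Applied to the question's own container, the same idea with docker exec might look like this (a sketch; the container name and script path are the ones from the question and assumed to exist, and the container must already be running):
import subprocess

cmd = ["docker", "exec", "-u", "fenics", "name_of_my_container",
       "python2", "shared/script_to_be_executed.py"]
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout.decode())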
You can use subprocess to call Fenics as an application, section 4.4 here.
docker run --rm -v $(pwd):/home/fenics/shared -w /home/fenics/shared quay.io/fenicsproject/stable "python3 my-code.py"

python subprocess output on nohup

I am trying to monitor the available physical disk space of a remote machine using a Python script, which executes the df -h . command via subprocess.Popen.
import subprocess
import time

command = 'ssh remoteserver "df -h ."'
while True:
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = proc.communicate()
    print output
    print err
    time.sleep(60)
The script runs fine and prints the output to the terminal when run from the command line:
$> python2.7 script.py
Filesystem Size Used Avail Use% Mounted on
remoteserver:/home/user
555G 447G 109G 81% /home
The script does not produce any output and seems to block when it is started with the nohup command:
$> nohup python2.7 script.py &
I would like the script to fetch the disk space of the remote machine, as above, even when it is started under nohup.
I'm not 100% sure of the underlying issue here, but when you invoke nohup in the shell, it disconnects stdin/stdout from the terminal process, which I suspect is causing the behaviour you're seeing.
Given that you're doing this against a remote machine, I'd actually recommend looking at something like Fabric as a library to do what you're after. It's pretty straightforward, and it handles most of the terminal-session details as well as closing things down nicely for you when you're done.
Something like:
from fabric import api
from fabric.api import env
import fabric

env.host_string = '%s@%s' % (username, remote_host)
env.disable_known_hosts = True
env.password = password
fabric.state.output['stdout'] = False
fabric.state.output['stderr'] = False
results = api.run('df -h')
You might try sending stdin=subprocess.PIPE to the subprocess command, then calling proc.stdin.close() on the next line, before the communicate() call. Or you can try changing the command to 'ssh remoteserver "df -h ." </dev/null'. Others report using FNULL = open(os.devnull, 'r') and passing in FNULL to the stdin= argument, but I'm not sure if you need to call FNULL.close() after or not.
SSH is most likely waiting for input for some reason when it is run from nohup. Perhaps it is unable to authenticate in the nohup environment and is asking for password input?
To make sure SSH is not waiting for input, try adding -o "BatchMode yes" to the ssh command and see if there are some clues in the output/error from the subprocess communicate call.
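Combining those two suggestions, here is a sketch of the loop from the question with ssh given no terminal input to wait on and BatchMode enabled, so a password prompt turns into an immediate error instead of a silent hang (the server name is the one from the question):
import subprocess
import time

command = 'ssh -o "BatchMode yes" remoteserver "df -h ." < /dev/null'
while True:
    proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    output, err = proc.communicate()
    print(output)
    print(err)
    time.sleep(60)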
