Python - Capture exit status of command executed via SSH

I need a way to capture the exit status of a command run through SSH; I'd like the exit status to end up in a variable. I cannot seem to get it working, though.
Command would be something simple like:
os.system("ssh -qt hostname 'sudo yum list updates --security > /tmp/yum_update_packagelist.txt';echo $?")
Anyone have an idea? Everything I've tried has either not worked at all, or ended up giving me the exit status of the ssh command, not the underlying command.

You probably want to use an SSH library like paramiko (or spur, Fabric, etc.… just google/PyPI/SO-search for "Python SSH" to see all the options and pick the one that best matches your use case). There are demos included with paramiko that do exactly what you want to do.
If you insist on scripting the command-line ssh tool, you (a) almost certainly want to use subprocess instead of os.system (as the os.system docs explicitly say), and (b) will need to do some bash-scripting (assuming the remote side is running bash) to pass the value back to you (e.g., wrap it in a one-liner script that prints the exit status on stderr).
If you just want to know why your existing code doesn't work, let's take a look at it:
os.system("ssh -qt hostname 'sudo yum list updates --security > /tmp/yum_update_packagelist.txt';echo $?")
First, you're running an ssh command, then a separate echo $? command, which will echo the exit status of ssh. If you wanted to echo the status of the sudo, you need to get the semicolon into the ssh command. (And if you wanted the status of the yum, inside the sudo.)
Second, os.system doesn't look at what gets printed to stdout anyway. As the docs clearly say, "the return value is the exit status of the process". So, you're getting back the exit status of the echo command, which is pretty much guaranteed to be 0.
So, to make this work, you'd need to get the echo into the right place, and then read the stdout by using subprocess.check_output or similar instead of os.system, and then parse that output to read the last line. If you do all that, it should work. But again, you shouldn't do all that; just use paramiko or another SSH library.
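For reference, here's a minimal paramiko sketch ('hostname' and credentials are placeholders; recv_exit_status() gives you the remote command's status directly):

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('hostname')
# exec_command runs the remote command; its channel carries the exit status
stdin, stdout, stderr = client.exec_command(
    "sudo yum list updates --security > /tmp/yum_update_packagelist.txt")
exit_status = stdout.channel.recv_exit_status()
client.close()

And if you do insist on scripting the ssh tool, the fixed-up version would look roughly like this (note that echo $? is now inside the quoted remote command):

import subprocess

output = subprocess.check_output(
    ["ssh", "-qt", "hostname",
     "sudo yum list updates --security > /tmp/yum_update_packagelist.txt; echo $?"])
# the last whitespace-separated token is the status echoed by the remote shell
exit_status = int(output.split()[-1])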

You probably want to use the Fabric API instead of directly calling the ssh executable.
From there, check out Can I catch error codes when using Fabric to run() calls in a remote shell?
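For a rough idea with Fabric 1.x ('hostname' is a placeholder): wrapping run() in warn_only keeps Fabric from aborting on a non-zero exit, so you can read the return code yourself:

from fabric.api import env, run, settings

env.host_string = 'hostname'  # placeholder

with settings(warn_only=True):
    result = run('sudo yum list updates --security > /tmp/yum_update_packagelist.txt')
print(result.return_code)  # exit status of the remote command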

Not able to send commands to shell I logged into
Originally, I wrote a Python script. It was able to send commands like
subprocess.run(['kubectl', 'config', 'get-context'], shell=True)
but when it came time to get to the child shell, in this case bash, the commands wouldn't run until I exited that shell, and then it would say things like it couldn't find the command.
I then tried to do it with the "sh" module, but was also unsuccessful.
I thought maybe using Python was the problem, and I also realized my ultimate goal was to use a different shell (cypher-shell), so I skipped straight to that with bash as the parent shell. There I have a line that is sometimes successful, sometimes not:
kubectl run -it --rm cypher-shell --image=gcr.io/cloud-marketplace/neo4j-public/causal-cluster-k8s:3.4 --restart=Never --namespace=default --command -- ./bin/cypher-shell -u neo4j -p "password" -a "domain.name"
But even when it successfully logs in, it just hangs until I manually exit, and then it runs the next commands.
Note: I saw this and so, perhaps, it's not a child shell? Run shell command from child shell
I can't say I know exactly what you are doing, but if I understand your objective correctly, you want the Python program to keep logging while the script continues to run? The problem is that the logger keeps running and holds up your program. The way I would deal with that is to run the logger as a background process.
With bash, that would be ./script.sh &, which lets it run without holding the rest of the program back.
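In Python, the rough equivalent (assuming ./script.sh stands in for your long-running logger) is subprocess.Popen, which returns as soon as the child starts:

import subprocess

# Popen does not wait for the child: the logger keeps running in the
# background while the rest of the program continues.
logger = subprocess.Popen(['./script.sh'])
# ... rest of the program ...
logger.wait()  # optional: block at the very end until the logger exits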
Hopefully that may give you an idea! Good luck.

Run a series of external commands in a Python script

I'm trying to run external commands (note the plural) from a Python script. I've been reading about the subprocess module and am using it. It works for me when I have a single command or independent commands to run, whether I'm interested in the stdout or not.
What I want to do is a bit different: I want something persistent. Basically the first command I run is to log in an application, then I can run some other commands which only work if I'm logged in. For some of these commands, I need the stdout.
So when I use subprocess to log in, it works, but then the process is killed, and the next time I run a command with subprocess I'm not logged in anymore... I just need to run a series of commands, like I would do in a terminal.
Any idea how to do that?
You can pass in an arbitrarily complex series of commands with shell=True, though I would generally advise against doing that, not least because you are making your Python script platform-dependent.
result = subprocess.check_output('''
    servers=0
    for server in one two three four; do
        output=$(printf 'echo moo\necho bar\necho baz\n' | ssh "$server")
        case $output in *"hello"*) echo "$output";; esac
        echo "$output" | grep -q 'ALERT' && echo "$server: Intrusion detected"
        servers=$((servers + 1))
    done
    echo "$servers hosts checked"
    ''', shell=True)
One of the problems with shell script (or, I guess, PowerShell or cmd batch script if you are in that highly unfortunate predicament) is that doing what you are vaguely describing is often hard to do with a bunch of unconnected processes. E.g. curl has a crude way to maintain a session between separate invocations by keeping a "cookie jar", which allows one curl invocation to pass on login credentials etc. to an otherwise independent curl call later on, but there is no good, elegant, general mechanism for this. If at all possible, doing these things from within Python would probably make your script more robust as well as simpler and more coherent.
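One sketch of the "within Python" approach: keep a single long-lived child process and feed every command to it, so they all share one session (bash here stands in for whatever tool you log in to):

import subprocess

# All the commands run inside the same child process, so they share
# state: working directory, environment, any login done by earlier commands.
shell = subprocess.Popen(['bash'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, universal_newlines=True)
commands = '\n'.join([
    'cd /tmp',
    'pwd',  # prints /tmp because the cd above happened in the same session
])
output, _ = shell.communicate(commands)
print(output)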

ssh session as python subprocess takes input but does not print it to stdout

I'm trying to use Python's cmd library to create a shell with limited commands. One requirement I have is to be able to run a command that executes an existing shell script, which opens an ssh session on a remote machine and from there allows the user to interact with the remote shell as if it were a regular ssh session.
Simply using subprocess.Popen(['/path/to/connect.sh']) works well, at least as a starting point, except for one issue. You can interact with the remote shell, but the input that you type is not shown on stdout... so, for example, you see the prompt on your stdout, but when you type 'ls' you don't see it being typed; when you hit return, it works as expected.
I'm trying to wrap my head around how to print the input to stdout and still send it along to the remote ssh session.
EDIT:
Actual code without using cmd was just the one line:
ssh_session = subprocess.Popen(['connect.sh'])
it was fired from a do_* method in a class which extended cmd.Cmd. I think I may end up using paramiko but would still be interested in anyone's input on this.
Assuming you are using a Unix-like system: SSH detects whether or not it is attached to a terminal. When it detects that it is not, as when run under subprocess, it will not echo the characters typed. Instead, you might want to use a pseudo-terminal; see pexpect or the pty module. This way you can get the output from SSH as if it were running on a true terminal.
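A minimal pexpect sketch (assuming connect.sh is the script from the question): spawn() runs the child inside a pseudo-terminal, so ssh echoes input the way it would on a real terminal:

import pexpect

child = pexpect.spawn('./connect.sh')
child.interact()  # hand the session to the user; typed input is now visible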

run ssh command remotely without redirecting output

I want to run a Python script on my server (that Python script has a GUI). But I want to start it from ssh. Something like this:
ssh me@server -i my_key "nohup python script.py"
... > let the script run forever
BUT it complains "unable to access video driver" since it is trying to use my ssh terminal as output.
Can I somehow make my command's output stay on the server machine and not come to my terminal? Basically something like wake-on-LAN functionality: tell the server you want something and it will do everything using its own system (not sending any output back).
What about
ssh me@server -i my_key "nohup python script.py >/dev/null 2>&1"
? :)
You can use redirection to some remote logfile instead of /dev/null, of course.
EDIT: GUI applications on X usually use the $DISPLAY variable to know where they should be displayed. Moreover, X11 display servers use authorization to permit or disallow applications connecting to their display. The commands
export DISPLAY=:0 && xhost +
may be helpful for you.
Isn't it possible for you to use a Python SSH library instead of calling the external application? (A sketch follows the list.) It would:
run as one process
guarantee that the invocation will be the same across all possible systems
avoid the overhead of spawning an external process
send everything through ssh (you won't have to worry about input like "; some command executed locally")
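For illustration, a paramiko sketch along these lines (me/server/my_key are the placeholders from the question):

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect('server', username='me', key_filename='my_key')
# point the script at the server's own display, discard output, and
# background it so the channel can close while the GUI keeps running
client.exec_command('export DISPLAY=:0; nohup python script.py >/dev/null 2>&1 &')
client.close()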
If not, go with what Piotr Wades suggested.

SQLite error when loading two python scripts simultaneously

I have two python scripts that have to run simultaneously because they interact with each other. One script is a 'server' script running locally and the other is client script that connects to it via a socket. Normally I just open a couple terminal tabs and run the server script in one and the client in the other. After starting and stopping each script over and over, I wanted to make a bash alias to run both scripts with just one command and came up with this:
gnome-terminal --tab -e "python server.py" --tab -e "python client.py"
However, now the server script is raising an sqlite OperationalError saying that one of my data tables doesn't exist. But when I run the scripts manually everything works fine. I have no clue what is going on, but I thought that maybe running the scripts together wasn't giving the server script enough time to initialize and make its connection to the database. So I put a time.sleep(5) in the client script, but as soon as it starts I get the same error.
Anyone have an idea what could be happening? Or does anyone know of any alternatives for starting two python scripts with one command?
Try combining the two commands into one:
gnome-terminal --tab -x bash -c "python server.py & sleep 5; python client.py"
I think it is better to put the sleep command (if needed) outside the client, since there may be situations where the server is already started and the client does not have to sleep.
The -x flag means
-x, --execute
Execute the remainder of the command line inside the terminal.
The command calls bash:
bash -c "python server.py & sleep 5; python client.py"
bash in turn, has a -c flag which means
-c string
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
You might want to experiment with
gnome-terminal --tab -e "python server.py & sleep 5; python client.py"
That might work too. When you invoke bash explicitly, your ~/.bashrc is read; without calling bash, I think /bin/sh is used by default.
If you get
"socket.error: [Errno 98] Address already in use",
it probably means that your server has already been started, and running the server a second time fails.
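If you'd rather skip gnome-terminal entirely, a small launcher script is another one-command option; this sketch mirrors the same start-then-wait idea:

import subprocess
import time

# start the server first so it can create its database tables
server = subprocess.Popen(['python', 'server.py'])
time.sleep(5)  # crude; retrying the client's connect until it succeeds is more robust
client = subprocess.Popen(['python', 'client.py'])
client.wait()
server.terminate()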
