SSH and exec channels with python shell

We have implemented a python shell for our hardware devices that consists solely of the python cmd module on embedded linux. Our (non-root) user's shell is set to the path of this python shell in /etc/passwd. Code example below:
#!/usr/bin/env python
import cmd
import signal
import sys

# branding, _sigint_handler, and do_EOF are defined elsewhere in the real module

class OurCmdProcessor(cmd.Cmd, object):

    def __init__(self):
        cmd.Cmd.__init__(self)
        ...

    def cmdloop(self, intro=None):
        """Command loop.

        Overrides cmd class cmdloop() method.
        """
        signal.signal(signal.SIGINT, self._sigint_handler)
        try:
            cmd.Cmd.cmdloop(self, intro=intro)
        except:
            print("{} v2 exception!".format(branding))
            #traceback.print_exc(file=sys.stdout)
            sys.stdout.flush()
            # This *exits* the cmd shell on exception
            self.do_EOF()

    def do_help(self, arg):  # do_* handlers receive the rest of the command line
        print("Here is some help text!")

etc...
Previously, one of our clients used SSH.NET to issue command-line commands via that library's RunCommand function, which sets up a standard SSH 'exec' request over the SSH connection and then parses the output and return value (i.e., open channel, channel success, send command, and so on).
Now that call doesn't work, presumably because we've switched from /bin/sh to this python shell. What does work is sending the command text followed by a newline through that library's SSH Shell object and then scanning the output.
What I'm asking is: is it possible to implement something in this shell to handle the standard SSH 'exec' request the library issues? Or does the shell output received after executing a command over an SSH shell request already include the exit value? We don't want exit values to be part of the command's printable output.
We are using dropbear SSH server on our embedded linux device.

To work in this mode, your Python script should be able to parse its command-line arguments in the same way it parses commands given interactively to your Cmd instance.
That is:
./yourpython -c "some command"
should work identically to:
./yourpython <<EOF
some command
EOF
...and should have an exit status that reflects whether the last command to be executed succeeded.
This is equivalent to how ssh hostname 'some command' runs "${SHELL:-sh}" -c 'some command' on the remote host.
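For illustration, here is a minimal sketch of an entry point for a Cmd-based shell like the one above. It reuses the question's OurCmdProcessor, and the mapping of onecmd()'s return value to an exit status is an assumption you would adapt to your own do_* conventions:

import sys

if __name__ == "__main__":
    shell = OurCmdProcessor()  # the cmd.Cmd subclass from the question
    if len(sys.argv) >= 3 and sys.argv[1] == "-c":
        # SSH 'exec' request: dropbear runs the login shell as
        # "$SHELL -c 'some command'", so run the single command and exit
        result = shell.onecmd(sys.argv[2])
        # assumption: do_* handlers return a falsy value on success
        sys.exit(0 if not result else 1)
    else:
        if not sys.stdin.isatty():
            # commands piped on stdin (the here-document case)
            shell.use_rawinput = False
        shell.cmdloop()

The exit status travels back over the exec channel, which is what SSH.NET's RunCommand reports, so it never has to appear in the printable output.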

Related

How to run a shell script in background and capture its output using Python?

I am working on a click application. One of the subcommands runs a shell script using the subprocess module.
callback for the command:
import logging
from pprint import pformat
from subprocess import run

LOGGER = logging.getLogger(__name__)

def run_script(script, *args):
    LOGGER.info(f"Runs '{script}' and passes in necessary args {args}")
    LOGGER.debug(pformat(locals()))
    command = [script]
    for value in args:
        command.append(value)
    LOGGER.debug(f"command: {command}")
    return run(command, capture_output=True, check=True)  # type: ignore
However, the problem is that this blocks the terminal until the script has finished executing. I believe using Popen will solve this, but I also want the output generated by the shell script to be redirected to LOGGER (which in this case writes to a file).
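A minimal sketch of that approach (run_script_background is an illustrative name, and LOGGER is assumed to be a standard logging.Logger writing to a file): start the script with Popen and pump its output into LOGGER from a background thread, so the terminal is not blocked.

import logging
from subprocess import Popen, PIPE, STDOUT
from threading import Thread

LOGGER = logging.getLogger(__name__)

def run_script_background(script, *args):
    # merge stderr into stdout so a single reader sees everything
    proc = Popen([script, *args], stdout=PIPE, stderr=STDOUT, text=True)

    def pump():
        # iteration ends when the process closes its stdout
        for line in proc.stdout:
            LOGGER.info(line.rstrip())

    Thread(target=pump, daemon=True).start()
    return proc  # the caller can proc.wait() or proc.poll() later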

Why are these python print lines indented? Context: port forwarding with ssh using python Popen

I have this piece of code that is supposed to use subprocess.Popen to forward some ports in the background. It produces unexpectedly indented print lines -- any idea what went wrong here?
Also if you have a better way of port forwarding (in the background) with python, I'm all ears! I've just been opening another terminal and running the SSH command, but surely there's a way to do that programmatically, right?
The code:
import subprocess

print("connecting")
proc = subprocess.Popen(
    ["ssh", f"-L10005:[IP]:10121",
     f"[USERNAME]@[ANOTHER IP]"],
    stdout=subprocess.PIPE
)
for _ in range(100):
    realtime_output = str(proc.stdout.readline(), "utf-8")
    if "[IP]" in realtime_output:
        print("connected")
        break

# ... other code that uses the forwarded ports

print("terminating")
proc.terminate()
Expected behavior (normal print lines):
$ python test.py
connecting
connected
terminating
Actual behavior (wacky print lines):
$ python test.py
connecting
connected
         terminating
                    $ [next prompt is here for some reason?]
This is likely because ssh is opening up a full shell on the remote machine (if you type in some commands, they'll probably be run remotely!). You should disable this by passing -N so it doesn't run anything. If you don't ever need to type anything into ssh (i.e. entering passwords or confirming host keys), you can also pass -n so it doesn't read from stdin at all. With that said, it looks like you can also do this entirely within Python with the Fabric library, specifically Connection.forward_local().
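For reference, a hedged sketch of the Fabric route; the host names and ports are the placeholders from the question:

from fabric import Connection

# forward local port 10005 to port 10121 on [IP], tunnelled through [ANOTHER IP]
with Connection("[USERNAME]@[ANOTHER IP]") as c:
    with c.forward_local(10005, remote_port=10121, remote_host="[IP]"):
        ...  # other code that uses the forwarded ports

The tunnel is torn down when the forward_local block exits, so there is no subprocess to terminate.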
The indented line weirdness is due to either ssh or the remote shell changing some terminal settings, one of which adds carriage returns before newlines that get sent to the terminal. When this is disabled, each line will start at the horizontal position of the end of the previous line:
$ stty -onlcr; printf 'foo\nbar\nbaz\n'; stty onlcr
foo
   bar
      baz
         $

Python Paramiko "exec_command" does not execute in a Django application

I am facing an issue with the Python Paramiko library in my Django Application
This is a function I have written to create an SFTP connection:
def createSFTPConnection(request, temp_pwd):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    user = User.objects.get(username=request.user)
    # temp_host is defined elsewhere in the view
    ssh.connect(hostname=temp_host, username=user.username, password=temp_pwd, port=22)
    sftp_client = ssh.open_sftp()
    return ssh, user, sftp_client
This just returns the ssh variable, the user, and the sftp_client
Then I execute a command on the remote server using this code -
ssh,user,sftp_client=createSFTPConnection(request,temp_pwd) # passes the password on that server for the user for creating the connection
cmd_to_execute="(cd "+temporary_path+"; sh temp.sh"+" "+var1+" "+var2+")" # executing a shell script by passing it 2 variables
stdin, stdout, stderr = ssh.exec_command(cmd_to_execute) # actual call
print("stderr: ", stderr.readlines())
print("pwd: ", stdout.readlines())
Now, this code works fine and executes the script temp.sh on the remote server, but it takes a lot of time, since I am returning stdin, stdout, and stderr and printing them to the console.
Since I don't want that, I removed the readlines() calls, making my code look like this -
cmd_to_execute="(cd "+temporary_path+"; sh temp.sh"+" "+var1+" "+var2+")"
stdin, stdout, stderr = ssh.exec_command(cmd_to_execute) # actual call
But for some reason, this code just doesn't execute on the remote server after removing the readlines() calls, making me think that exec_command does not work without a readlines() call after it.
And I don't know why this is happening.
Any help would be highly appreciated!!
Thanks!!
For your info -
This is the Django code after the readlines() calls -
usr_msg="Your file has been uploaded successfully!! This is your variable: "+var1
messages.success(request, usr_msg)
ssh.close()
sftp_client.close()
return redirect("/execute/all")
SSHClient.exec_command only starts the execution of the command. If you do not wait for it to complete but instead immediately kill the session, the command is killed along with it.
If you want to keep the command running even after you disconnect, you need to detach it from the session.
See Running process of remote SSH server in the background using Python Paramiko.
It's basically not a Python/Paramiko question, see also Getting ssh to execute a command in the background on target machine.
First, make it work in ssh/plink/whatever client before trying to implement it in Python. Something like:
ssh user@example.com "cd path; nohup sh script.sh > /dev/null 2>&1 &"
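Once that command line works in a plain ssh client, the Paramiko side stays simple. A hedged sketch, reusing the ssh SSHClient from the question (the path and script name are placeholders):

cmd = 'cd path; nohup sh script.sh > /dev/null 2>&1 &'
stdin, stdout, stderr = ssh.exec_command(cmd)
# with output redirected and the process detached by nohup/&, the remote
# shell exits immediately, so this returns quickly while script.sh keeps
# running on the server
stdout.channel.recv_exit_status()
ssh.close()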

Pseudo terminal will not be allocated error

I basically want to create a web page through which a unix terminal at the server side can be reached and commands can be sent to and their results can be received from the terminal.
For this, I have a WSGIServer. When a connection is opened, I execute the following:
from subprocess import PIPE, STDOUT, Popen
from threading import Thread

def opened(self):
    self.p = Popen(["bash", "-i"], bufsize=1, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    self.p.stdout = Unbuffered(self.p.stdout)  # Unbuffered is a wrapper class defined elsewhere
    self.t = Thread(target=self.listen_stdout)
    self.t.daemon = True
    self.t.start()
When a message comes to the server from the client, it is handled in the following function, which redirects the incoming message to the stdin of subprocess p, an interactive bash:
def received_message(self, message):
    print(message.data, file=self.p.stdin)
The output of bash is then read in the following function, within a separate thread t, which only sends the output to the client:
def listen_stdout(self):
    while True:
        c = self.p.stdout.read(1)
        self.send(c)
In such a system, I am able to send any command (ls, cd, mkdir, etc.) to the bash running on the server side and receive its output. However, when I try to run ssh xxx@xxx, the error "pseudo-terminal will not be allocated because stdin is not a terminal" is shown.
Also, in a similar way, when I run sudo ..., the password prompt is not sent to the client; it appears on the terminal of the server script instead.
I am aware of expect; however, only for such sudo and ssh usage, I do not want to mess my code up with expect. Instead, I am looking for a general solution that can fake sudo and ssh and redirect prompt's to the client.
Is there any way to solve this? Ideas are appreciated, thanks.
I found the solution. What I needed was to create a pseudo-terminal, and on the slave side of the tty, make a setsid() call to start a new session and run commands in it.
Details are here:
http://www.rkoucha.fr/tech_corner/pty_pdip.html
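A minimal sketch of that approach with the standard pty module: pty.fork() creates the pseudo-terminal, calls setsid() in the child, and makes the slave side the child's controlling terminal, so programs like ssh and sudo see a real tty:

import os
import pty

pid, master_fd = pty.fork()
if pid == 0:
    # child: new session, pty slave on stdin/stdout/stderr
    os.execvp("bash", ["bash", "-i"])
else:
    # parent: drive the shell through the master side of the pty
    os.write(master_fd, b"echo hello\n")
    print(os.read(master_fd, 1024).decode())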

basic paramiko exec_command help

I'm a new paramiko user and am having difficulty running commands on a remote server with paramiko. I want to export a path and also run a program called tophat in the background. I can log in fine with paramiko.SSHClient(), but my exec_command calls produce no results.
stdin, stdout, sterr = ssh.exec_command(
    'export PATH=$PATH:/proj/genome/programs/tophat-1.3.0/bin'
    ':/proj/genome/programs/cufflinks-1.0.3/bin'
    ':/proj/genome/programs/bowtie-0.12.7'
    ':/proj/genome/programs/samtools-0.1.16')
stdin, stdout, sterr = ssh.exec_command(
    'nohup tophat -o /output/path/directory -I 10000 -p 8 --microexon-search'
    ' -r 50 /proj/genome/programs/bowtie-0.12.7/indexes/ce9'
    ' /input/path/1 /input/path/2 &')
there is no nohup.out file, and python just goes to the next line with no error messages. I have tried without nohup as well, and the result is the same. I was trying to follow this paramiko tutorial.
am I using exec_command incorrectly?
I also ran into the same issue and after looking at this article and this answer, I see the solution is to call the recv_exit_status() method of the Channel. Here is my code:
import paramiko
import time

cli = paramiko.client.SSHClient()
cli.set_missing_host_key_policy(paramiko.client.AutoAddPolicy())
cli.connect(hostname="10.66.171.100", username="mapping")
stdin_, stdout_, stderr_ = cli.exec_command("ls -l ~")
# time.sleep(2)  # Previously, I had to sleep for some time.
stdout_.channel.recv_exit_status()  # block until the remote command finishes
lines = stdout_.readlines()
for line in lines:
    print(line)
cli.close()
Now my code will be blocked until the remote command is finished. This method is explained here, and please pay some attention to the warning.
exec_command() is non-blocking; it just sends the command to the server, and then Python runs the following code.
I think you should wait for the command execution to end and do the rest of the work after that.
time.sleep(10) could help, which requires import time.
Some examples show that you can read from the stdout ChannelFile object, or simply use stdout.readlines(); that reads the whole response from the server, which could help.
Your two exec_command lines above actually run in different exec sessions, so the PATH exported in the first does not affect the second. I'm not sure if this has some impact in your case.
I'd suggest you take a look at the demos in the demos folder; they use the Channel class, which has a better API for blocking/non-blocking sends over both shell and exec channels.
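To illustrate that, a hedged sketch of driving the exec request through the Channel API directly (the host and credentials are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("example.com", username="user", password="secret")

channel = client.get_transport().open_session()
channel.exec_command("ls -l ~")

# read output as it arrives instead of blocking on readlines()
output = b""
while not channel.exit_status_ready():
    if channel.recv_ready():
        output += channel.recv(4096)
# drain anything left after the command exited
while channel.recv_ready():
    output += channel.recv(4096)

print(output.decode())
print("exit status:", channel.recv_exit_status())
client.close()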
You'd better load the bash_profile before you run your command; otherwise you may get a 'command not found' error.
For example, I wrote the command command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql' to dump a MySQL table.
Then I had to load the bash_profile manually before that dump command by prefixing it with . ~/.profile; . ~/.bash_profile;.
Example
my_command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql;'
pre_command = """
. ~/.profile;
. ~/.bash_profile;
"""
command = pre_command + my_command
stdin, stdout, stderr = ssh.exec_command(command)
