basic paramiko exec_command help - python

I'm a new paramiko user and am having difficulty running commands on a remote server with paramiko. I want to export a path and also run a program called tophat in the background. I can log in fine with paramiko.SSHClient(), but my calls to exec_command produce no results.
stdin, stdout, stderr = ssh.exec_command('export PATH=$PATH:/proj/genome/programs/tophat-1.3.0/bin:/proj/genome/programs/cufflinks-1.0.3/bin:/proj/genome/programs/bowtie-0.12.7:/proj/genome/programs/samtools-0.1.16')
stdin, stdout, stderr = ssh.exec_command('nohup tophat -o /output/path/directory -I 10000 -p 8 --microexon-search -r 50 /proj/genome/programs/bowtie-0.12.7/indexes/ce9 /input/path/1 /input/path/2 &')
There is no nohup.out file, and Python just goes on to the next line with no error messages. I have tried without nohup as well, and the result is the same. I was trying to follow this paramiko tutorial.
Am I using exec_command incorrectly?

I ran into the same issue, and after looking at this article and this answer, I see the solution is to call the recv_exit_status() method of the Channel. Here is my code:
import paramiko
import time

cli = paramiko.client.SSHClient()
cli.set_missing_host_key_policy(paramiko.client.AutoAddPolicy())
cli.connect(hostname="10.66.171.100", username="mapping")
stdin_, stdout_, stderr_ = cli.exec_command("ls -l ~")
# time.sleep(2)  # Previously, I had to sleep for some time.
stdout_.channel.recv_exit_status()  # Block until the remote command finishes.
lines = stdout_.readlines()
for line in lines:
    print(line)
cli.close()
Now my code blocks until the remote command is finished. This method is explained here; please pay some attention to the warning.

exec_command() is non-blocking: it just sends the command to the server, and Python immediately runs the following code.
I think you should wait for the command execution to end and do the rest of the work after that.
time.sleep(10) could help (it requires import time).
Some examples show that you can read from the stdout ChannelFile object, or simply use stdout.readlines(), which reads the whole response from the server; this could help too.
Your two exec_command lines above actually run in separate exec sessions, so the PATH exported by the first does not carry over to the second. This likely matters in your case.
I'd suggest you take a look at the demos in the demos folder; they use the Channel class, which has a better API for blocking/non-blocking sends in both shell and exec modes.
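For illustration, a minimal sketch that chains the export and the tophat invocation in one exec session and blocks until it exits; the host, user, and abbreviated paths are placeholders drawn from the question:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("example.com", username="user")
# Run both in ONE exec session so the exported PATH is visible to tophat.
command = ("export PATH=$PATH:/proj/genome/programs/tophat-1.3.0/bin && "
           "tophat -o /output/path/directory /input/path/1 /input/path/2")
stdin, stdout, stderr = ssh.exec_command(command)
exit_status = stdout.channel.recv_exit_status()  # block until tophat finishes
print("tophat exit status:", exit_status)
ssh.close()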

You had better load the bash profile before you run your command; otherwise you may get a 'command not found' error.
For example, I wrote the command command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql' to dump a MySQL table.
Then I had to load the bash profile manually before that dump command, by prepending . ~/.profile; . ~/.bash_profile;.
Example
my_command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql;'
pre_command = """
. ~/.profile;
. ~/.bash_profile;
"""
command = pre_command + my_command
stdin, stdout, stderr = ssh.exec_command(command)

Related

Paramiko exec_command only works when printing exit status [duplicate]

This question already has an answer here: Paramiko SSH exec_command (shell script) returns before completion (1 answer). Closed 1 year ago.
I am writing a program in Python on Ubuntu. In that program I am trying to print a message after completing a task (deleting a file) on a remote machine (a Raspberry Pi) connected to the network.
In actual practice, however, the print command does not wait for the task on the remote machine to complete.
Can anybody guide me on how to do that?
My code is given below:
import paramiko
# Connection with remote machine
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.2.34', username='pi', password='raspberry')
filename = 'fahad.txt'
filedelete ='rm ' + filename
stdin, stdout, stderr = client.exec_command(filedelete)
print ("File Deleted")
client.close()
This is indeed a duplicate of paramiko SSH exec_command (shell script) returns before completion, but the answer there is not terribly detailed. So...
As you noticed, exec_command is a non-blocking call. So you have to wait for completion of the remote command by using either:
Channel.exit_status_ready if you want a non-blocking check of the command's completion (i.e. polling), or
Channel.recv_exit_status if you want to block until the command completes (and get back the exit status; an exit status of 0 means normal completion).
In your particular case, you need the latter:
stdin, stdout, stderr = client.exec_command(filedelete)  # Non-blocking call
exit_status = stdout.channel.recv_exit_status()          # Blocking call
if exit_status == 0:
    print("File Deleted")
else:
    print("Error", exit_status)
client.close()
In addition to doing what Sylvain Leroux suggests:
If your commands involve running a bash script that needs to keep running after Paramiko closes the SSH session (this happens every time you send a command), use:
nohup ./my_bash_script.sh >/dev/null 2>&1 &
nohup tells the system that this process should ignore the "hang up" signal it receives when the SSH session is closed.
>/dev/null 2>&1 redirects the output. This is necessary because in some situations control will not be given back to your Python script until output is received.
To run command-line applications like stress and vlc and keep them running after you return, the only solution I have found is to put your commands in a bash script ending with & or &>/dev/null, then call that bash script with Paramiko using the method mentioned in the previous paragraph.
This seems a bit "hacky", but it is the only way I have found after days of searching.
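A minimal sketch of this approach; the host, user, and script path are placeholders:

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect("example.com", username="user")
# nohup + output redirection + '&' detaches the script from the session,
# so it keeps running after ssh.close().
stdin, stdout, stderr = ssh.exec_command("nohup ./my_bash_script.sh >/dev/null 2>&1 &")
stdout.channel.recv_exit_status()  # returns quickly: the outer shell exits once the script is forked
ssh.close()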

Python Paramiko "exec_command" does not execute - Django

I am facing an issue with the Python Paramiko library in my Django application.
This is a function I have written to create an SFTP connection:
def createSFTPConnection(request, temp_pwd):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    user = User.objects.get(username=request.user)
    ssh.connect(hostname=temp_host, username=user.username, password=temp_pwd, port=22)
    sftp_client = ssh.open_sftp()
    return ssh, user, sftp_client
This just returns the ssh handle, the user object, and sftp_client.
Then I execute a command on the remote server using this code:
ssh,user,sftp_client=createSFTPConnection(request,temp_pwd) # passes the password on that server for the user for creating the connection
cmd_to_execute="(cd "+temporary_path+"; sh temp.sh"+" "+var1+" "+var2+")" # executing a shell script by passing it 2 variables
stdin, stdout, stderr = ssh.exec_command(cmd_to_execute) # actual call
print("stderr: ", stderr.readlines())
print("pwd: ", stdout.readlines())
Now, this code works fine and executes the script temp.sh on the remote server, but it takes a lot of time, since I am returning stdin, stdout, and stderr and printing them to the console.
Since I don't want that, I removed the readlines() calls, making my code look like this:
cmd_to_execute="(cd "+temporary_path+"; sh temp.sh"+" "+var1+" "+var2+")"
stdin, stdout, stderr = ssh.exec_command(cmd_to_execute) # actual call
But for some reason, this code just doesn't execute on the remote server after removing the readlines() calls,
making me think that exec_command does not work without a readlines() call following it.
And I don't know why this is happening.
Any help would be highly appreciated!!
Thanks!!
For your info, this is the Django code after the readlines() calls:
usr_msg="Your file has been uploaded successfully!! This is your variable: "+var1
messages.success(request, usr_msg)
ssh.close()
sftp_client.close()
return redirect("/execute/all")
SSHClient.exec_command only starts execution of the command. If you do not wait for it to complete and immediately kill the session, the command is killed along with it.
If you want to keep the command running even after you disconnect, you need to detach it from the session.
See Running process of remote SSH server in the background using Python Paramiko.
It's basically not a Python/Paramiko question; see also Getting ssh to execute a command in the background on target machine.
First, make it work in ssh/plink/whatever client before trying to implement it in Python. Something like:
ssh user@example.com "cd path; nohup sh script.sh > /dev/null 2>&1 &"
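Once that one-liner works in a plain ssh client, the Paramiko translation is direct. A hedged sketch reusing the question's own variables (temporary_path, temp.sh) and its connected client:

# 'ssh' is the connected paramiko.SSHClient from the question.
cmd = "cd " + temporary_path + "; nohup sh temp.sh > /dev/null 2>&1 &"
stdin, stdout, stderr = ssh.exec_command(cmd)
stdout.channel.recv_exit_status()  # the outer shell returns at once; temp.sh keeps running detached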

Pseudo terminal will not be allocated error - ssh - sudo - websocket - subprocess

I basically want to create a web page through which a Unix terminal on the server side can be reached, so that commands can be sent to it and their results received from it.
For this, I have a WSGIServer. When a connection is opened, I execute the following:
def opened(self):
    self.p = Popen(["bash", "-i"], bufsize=1, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
    self.p.stdout = Unbuffered(self.p.stdout)
    self.t = Thread(target=self.listen_stdout)
    self.t.daemon = True
    self.t.start()
When a message comes to the server from the client, it is handled in the following function, which only redirects the incoming message to the stdin of subprocess p, the interactive bash:
def received_message(self, message):
    print(message.data, file=self.p.stdin)
Then the output of the bash process is read in the following function, within a separate thread t. It only sends the output to the client.
def listen_stdout(self):
    while True:
        c = self.p.stdout.read(1)
        self.send(c)
In such a system, I am able to send any command (ls, cd, mkdir, etc.) to the bash process running on the server side and receive its output. However, when I try to run ssh xxx@xxx, the error pseudo-terminal will not be allocated because stdin is not a terminal is shown.
Also, in a similar way, when I run sudo ..., the password prompt is not sent to the client; it appears on the terminal of the server script instead.
I am aware of expect; however, I do not want to clutter my code with expect just for this sudo and ssh usage. Instead, I am looking for a general solution that can fake a terminal for sudo and ssh and redirect their prompts to the client.
Is there any way to solve this? Ideas are appreciated, thanks.
I found the solution. What I needed was to create a pseudo-terminal and, on the slave side of the tty, make a setsid() call to put the process in a new session, then run commands there.
Details are here:
http://www.rkoucha.fr/tech_corner/pty_pdip.html
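A minimal sketch of that idea, assuming a POSIX system; the command written to the master at the end is just an example:

import fcntl
import os
import pty
import subprocess
import termios

master, slave = pty.openpty()  # pseudo-terminal pair

def set_ctty():
    # Runs in the child between fork and exec: start a new session,
    # then make the pty (already dup'ed onto fd 0) the controlling terminal.
    os.setsid()
    fcntl.ioctl(0, termios.TIOCSCTTY, 0)

p = subprocess.Popen(["bash", "-i"],
                     stdin=slave, stdout=slave, stderr=slave,
                     preexec_fn=set_ctty)
os.close(slave)  # the child keeps its own copies

os.write(master, b"echo hello\n")
print(os.read(master, 1024))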

Subprocess Popen not capturing wget --spider command result

My understanding of capturing the output of a subprocess command as a string was to set stdout=subprocess.PIPE and use command.communicate() to capture result, error.
For example, typing the following:
command = subprocess.Popen(["nmcli", "con"], stdout=subprocess.PIPE)
res, err = command.communicate()
produces no output to the terminal and stores all my connection information as a bytes literal in the variable res. Simple.
It falls apart for me here though:
url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = subprocess.Popen(["wget", "--spider", url], stdout=subprocess.PIPE)
This prints the output of the command to the terminal, then pauses execution until a keystroke is input by the user. Subsequently running command.communicate() returns an empty bytes literal, b''.
Particularly odd to me is the pause in execution, as issuing the command in bash just prints the result and returns directly to the prompt.
All my searches just find Q&A about how to capture subprocess results in general, not anything about certain commands having to be captured in a different manner or anything particular about wget and subprocess.
Additional note, I have been able to use the wget command with subprocess to download files (no --spider option) without issue.
Any help greatly appreciated, this one has me stumped.
stderr is capturing the output, so because you are not piping stderr, you see the output when you run the command, and stdout is empty:
from subprocess import Popen, PIPE

url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = Popen(["wget", "--spider", url], stdout=PIPE, stderr=PIPE)
out, err = command.communicate()
print("This is stdout: {}".format(out))
print("This is stderr: {}".format(err))
This is stdout: b''
This is stderr: b'Spider mode enabled. Check if remote file exists.\n--2015-02-09 18:00:28-- http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent\nResolving torrent.ubuntu.com (torrent.ubuntu.com)... 91.189.95.21\nConnecting to torrent.ubuntu.com (torrent.ubuntu.com)|91.189.95.21|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 37429 (37K) [application/x-bittorrent]\nRemote file exists.\n\n'
I've never been asked anything by wget before, but some processes (e.g. ssh) do grab the terminal device (tty) directly to get a password, bypassing the pipe you've set up.
To automate cases like this, you need to fake a terminal instead of a normal pipe. There are recipes out there using termios and the like, but my suggestion would be to use the module "pexpect", which is written to do exactly that.
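For instance, a minimal pexpect sketch; the host, password, and prompt pattern below are placeholders:

import pexpect

# pexpect runs the command inside a pseudo-terminal, so ssh
# believes it is talking to a real terminal.
child = pexpect.spawn("ssh user@example.com")
child.expect("password:")
child.sendline("secret")
child.expect(r"\$")           # wait for a shell prompt
child.sendline("ls -l")
child.expect(r"\$")
print(child.before.decode())  # everything printed before the prompt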

Persistent ssh session in Python using Popen

I am creating a movie controller (pause/stop...) using Python, where I ssh into a remote computer and issue commands into a named pipe, like so:
echo -n q > ~/pipes/pipename
I know this works if I ssh via the terminal and do it myself, so there is no problem with the setup of the named-pipe redirection. My problem is that setting up an SSH session takes time (1-3 seconds), whereas I want the pause command to be instantaneous. Therefore, I thought of setting up a persistent session like so:
controller = subprocess.Popen("ssh -T -x <hostname>", shell=True, close_fds=True, stdin=subprocess.PIPE, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
Then I issue commands to it like so:
controller.stdin.write('echo -n q > ~/pipes/pipename')
I think the problem is that ssh is interactive, so it expects a carriage return. This is where my problems begin, as nearly everyone who has asked this question has been told to use an existing module:
Vivek's answer
Chakib's Answer
shx2's Answer
Crafty Thumber's Answer
Artyom's Answer
Jon W's Answer
Which is fine, but I am so close. I just need to know how to include the carriage return; otherwise I have to go learn all these other modules, which, mind you, is not trivial (for example, right now I can't figure out how pexpect uses either my /etc/hosts file or my keyless SSH authentication).
To add a newline to the command, you will need to add a newline to the string:
controller.stdin.write('\n')
You may also need to flush the pipe:
controller.stdin.flush()
And of course the controller has to be ready to receive new data, or you could block forever trying to send it data. (And if the reason it's not ready is that it's blocking forever waiting for you to read from its stdout, which is possible on some platforms, you're deadlocked unrecoverably.)
I'm not sure why it's not working the way you have it set up, but I'll take a stab at this. I think what I would do is change the Popen call to:
controller = subprocess.Popen("ssh -T -x <hostname> \"sh -c 'cat > ~/pipes/pipename'\"", ...
And then simply controller.stdin.write('q').
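Putting both answers together, a hedged sketch of the persistent session; the hostname and pipe path are placeholders:

import subprocess

# One long-lived ssh session; the remote 'cat' copies everything we
# write to its stdin into the named pipe.
controller = subprocess.Popen(
    ["ssh", "-T", "-x", "hostname", "cat > ~/pipes/pipename"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

controller.stdin.write(b"q")  # 'q' reaches the pipe with no per-command ssh startup cost
controller.stdin.flush()      # flush so the byte doesn't sit in Python's buffer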
