This question already has answers here: Paramiko with continuous stdout (2 answers). Closed last year.
Hi all, please help me with the following issue.
I have a Linux device, to which I connect via SSH using the Paramiko module in a Python script.
The issue is that, basically, if I send a command to the device like "ping google.com", the output is only shown after the command finishes (in this case never), so I have to use "ping -c 2 google.com", for example, and the output is shown only after those 2 pings to google.com have completed.
The thing is that not all applications support a count or timeout parameter. For example, I am not able to use "tail -f /file", because the tailing never finishes, so the output is never printed; nor can I get other live traces that the device prints gradually.
The relevant part of the script looks like this:
elif action == "cmd":
    cmd = command
    (stdin, stdout, stderr) = ssh.exec_command(cmd)
    out = stdout.readline()
    if out:
        print(out.replace("â", ""))  # strip mis-decoded characters (e.g. in "systemctl --failed" output)
    else:
        print(stderr.read().decode('ISO-8859-1'))
else:
    print("Your action: {} is invalid".format(action))
    sys.exit(11)
I'm a novice and unfortunately I can't test this right now, but the output of the command should be written to stdout even if it takes some time to arrive. If you move on to the next statement and read stdout before the device has been able to write to it, then you won't see your output. If you put in a sleep and then read stdout, does it show the results of your ping?
from time import sleep

# Use -c so the ping terminates; otherwise readlines() would block forever waiting for EOF
stdin, stdout, stderr = ssh.exec_command('ping -c 4 8.8.8.8')
sleep(5)
output = stdout.readlines()
print(output)
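That said, for commands that never terminate on their own (like tail -f), sleeping and then calling readlines() won't help, because readlines() waits for EOF. A minimal sketch of reading line by line as output arrives instead (the host, credentials, and file path are hypothetical placeholders):

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.10', username='user', password='secret')  # hypothetical host/credentials

# get_pty=True merges stderr into stdout and makes many programs line-buffer their output
stdin, stdout, stderr = ssh.exec_command('tail -f /var/log/syslog', get_pty=True)

# readline() returns as soon as a full line is available, so each line
# is printed as it arrives rather than after the command finishes
for line in iter(stdout.readline, ''):
    print(line, end='')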
Related
I'm making a shell with Python. So far I have gotten cd to work (not pretty, I know, but it's all I need for now). When I run su root (for example) I get a root shell, but I can't capture the output I receive after running a command. However, the shell does accept my commands; when I type exit, it exits. Is there a way to capture the output of a 'new' shell?
import os, subprocess

while True:
    command = input("$ ")
    if len(command.split(" ")) >= 2:
        print(command.split(" ")[0])  # This line is for debugging
        if command.split(" ")[0] == "cd" or command.split(" ")[1] == "cd":
            os.chdir(command.split(" ")[command.split(" ").index("cd") + 1])
            continue
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE, universal_newlines=True)
    output, error = process.communicate()
    print(output.strip("\n"))
EDIT: To make my request a bit more precise, I'd like a way to authenticate as another user from a python script, basically catching the authentication, doing it in the background and then starting a new subprocess.
You really need to understand how subprocess.Popen works. This call spawns a new sub-process (on a Unix machine, it calls fork and then exec). The new sub-process is a separate process. Your code just calls communicate once and then discards it.
If you just create a new shell by calling subprocess.Popen and then run su <user> inside of it, the shell will be closed right after that command, and the next time you'll be running the command as the same (original) user again.
What you probably want is to create a single subprocess at the beginning of your application, act as a sort of proxy between the user and the underlying process, and just keep writing to its stdin and reading from its stdout.
Here's an example:
import subprocess

process = subprocess.Popen(["bash"], stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE, universal_newlines=True)

while True:
    command = input("$ ")
    process.stdin.write(command + "\n")
    process.stdin.flush()
    output = process.stdout.readline()
    print(output.strip("\n"))
(I removed the cd command parsing bit because it wasn't essential to understanding the solution here, but you can definitely add specific handlers for specific inputs that wrap the underlying shell.) Note that readline() reads exactly one line per command, so this sketch will block on commands that print nothing and fall behind on commands that print several lines; a real proxy needs non-blocking reads or a reader thread.
This question already has an answer here: Paramiko SSH exec_command (shell script) returns before completion (1 answer). Closed 1 year ago.
I am writing a program in Python on Ubuntu. In that program I am trying to print a message after completing a task (deleting a file) on a remote machine (a Raspberry Pi) connected to the network.
But in actual practice, the print statement does not wait for the task on the remote machine to complete.
Can anybody guide me on how to do that?
My code is given below:
import paramiko
# Connection with remote machine
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.2.34', username='pi', password='raspberry')
filename = 'fahad.txt'
filedelete = 'rm ' + filename
stdin, stdout, stderr = client.exec_command(filedelete)
print("File Deleted")
client.close()
This is indeed a duplicate of Paramiko SSH exec_command (shell script) returns before completion, but the answer there is not terribly detailed. So...
As you noticed, exec_command is a non-blocking call. So you have to wait for completion of the remote command by using either:
Channel.exit_status_ready if you want a non-blocking check of the command's completion (i.e. polling), or
Channel.recv_exit_status if you want to block until the command completes (and get back the exit status; an exit status of 0 means normal completion).
In your particular case, you need the latter:
stdin, stdout, stderr = client.exec_command(filedelete)  # Non-blocking call
exit_status = stdout.channel.recv_exit_status()          # Blocking call
if exit_status == 0:
    print("File Deleted")
else:
    print("Error", exit_status)
client.close()
In addition to doing what Sylvain Leroux suggests:
If your command involves running a bash script that needs to keep running after Paramiko closes the SSH session (this happens every time you send a command), use:
nohup ./my_bash_script.sh >/dev/null 2>&1
nohup tells the system that this process should ignore the "hang up" signal received when the SSH session is closed.
>/dev/null 2>&1 redirects the output. This is necessary because in some situations control will not be given back to your Python script until the output streams are closed.
To run command-line applications like stress and vlc and keep them running after you return, the only solution I have found is to put your commands in a bash script ending with & or &>/dev/null, then call that bash script with Paramiko as in the sketch below.
This seems a bit hacky, but it is the only way I have found after days of searching.
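Putting that together, a minimal sketch of launching a long-lived remote process (the host, credentials, and script path are hypothetical placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.2.34', username='pi', password='raspberry')  # hypothetical credentials

# nohup + redirected output + '&' lets the remote process outlive this SSH session
client.exec_command('nohup ./my_bash_script.sh >/dev/null 2>&1 &')

client.close()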
My understanding of capturing the output of a subprocess command as a string was to set stdout=subprocess.PIPE and use command.communicate() to capture result, error.
For example, typing the following:
import subprocess

command = subprocess.Popen(["nmcli", "con"], stdout=subprocess.PIPE)
res, err = command.communicate()
produces no output to the terminal and stores all my connection information as a bytes object in the variable res. Simple.
It falls apart for me here though:
url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = subprocess.Popen(["wget", "--spider", url], stdout=subprocess.PIPE)
This prints the output of the command to the terminal, then pauses execution until the user presses a key. Subsequently running command.communicate() returns an empty bytes literal, b''.
Particularly odd to me is the pause in execution, as issuing the command in bash just prints the result and returns directly to the prompt.
All my searches just find Q&As about capturing subprocess results in general, not anything about certain commands having to be captured in a different manner, or anything particular about wget and subprocess.
As an additional note, I have been able to use the wget command with subprocess to download files (without the --spider option) without issue.
Any help greatly appreciated; this one has me stumped.
wget writes its progress messages to stderr, so because you are not piping stderr you see that output in the terminal when you run the command, and stdout is empty:
from subprocess import Popen, PIPE

url = "http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent"
command = Popen(["wget", "--spider", url], stdout=PIPE, stderr=PIPE)
out, err = command.communicate()
print("This is stdout: {}".format(out))
print("This is stderr: {}".format(err))
This is stdout: b''
This is stderr: b'Spider mode enabled. Check if remote file exists.\n--2015-02-09 18:00:28-- http://torrent.ubuntu.com/xubuntu/releases/trusty/release/desktop/xubuntu-14.04.1-desktop-amd64.iso.torrent\nResolving torrent.ubuntu.com (torrent.ubuntu.com)... 91.189.95.21\nConnecting to torrent.ubuntu.com (torrent.ubuntu.com)|91.189.95.21|:80... connected.\nHTTP request sent, awaiting response... 200 OK\nLength: 37429 (37K) [application/x-bittorrent]\nRemote file exists.\n\n'
I've never been asked anything by wget before, but some processes (e.g. ssh) grab the terminal device (tty) directly to read a password, bypassing the pipes you've set up.
To automate cases like this, you need to fake a terminal instead of a normal pipe. There are recipes out there using termios and friends, but my suggestion would be to use the module pexpect, which is written to do exactly that.
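A minimal sketch with pexpect (the host, command, and password are hypothetical placeholders; depending on your SSH configuration, a host-key prompt may appear before the password prompt):

import pexpect

# spawn() runs the command under a pseudo-terminal, so prompts written
# directly to the tty are visible to us
child = pexpect.spawn('ssh user@example.com uptime')  # hypothetical host/command
child.expect('password:')       # wait for the tty-bound password prompt
child.sendline('secret')        # hypothetical password
child.expect(pexpect.EOF)       # wait for the remote command to finish
print(child.before.decode())    # output received before EOF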
I'm a new Paramiko user and am having difficulty running commands on a remote server with Paramiko. I want to export a path and also run a program called tophat in the background. I can log in fine with paramiko.SSHClient(), but my code that calls exec_command has no results.
stdin, stdout, stderr = ssh.exec_command('export PATH=$PATH:/proj/genome/programs/tophat-1.3.0/bin:/proj/genome/programs/cufflinks-1.0.3/bin:/proj/genome/programs/bowtie-0.12.7:/proj/genome/programs/samtools-0.1.16')
stdin, stdout, stderr = ssh.exec_command('nohup tophat -o /output/path/directory -I 10000 -p 8 --microexon-search -r 50 /proj/genome/programs/bowtie-0.12.7/indexes/ce9 /input/path/1 /input/path/2 &')
There is no nohup.out file, and Python just goes to the next line with no error messages. I have tried without nohup as well, and the result is the same. I was trying to follow this Paramiko tutorial.
Am I using exec_command incorrectly?
I also ran into the same issue and after looking at this article and this answer, I see the solution is to call the recv_exit_status() method of the Channel. Here is my code:
import paramiko
import time

cli = paramiko.client.SSHClient()
cli.set_missing_host_key_policy(paramiko.client.AutoAddPolicy())
cli.connect(hostname="10.66.171.100", username="mapping")
stdin_, stdout_, stderr_ = cli.exec_command("ls -l ~")
# time.sleep(2)  # Previously, I had to sleep for some time.
stdout_.channel.recv_exit_status()  # Block until the remote command finishes
lines = stdout_.readlines()
for line in lines:
    print(line)
cli.close()
Now my code blocks until the remote command has finished. This method is explained here; please pay some attention to the warning.
exec_command() is non-blocking: it just sends the command to the server, and Python then runs the following code.
I think you should wait until the command execution ends and do the rest of the work after that.
time.sleep(10) could help (which requires import time).
Some examples show that you can read from the stdout ChannelFile object, or simply use stdout.readlines(), which reads the whole response from the server; this could help.
Also, your two exec_command lines actually run in different exec sessions, so the export from the first call does not affect the second; if the second command depends on the first, chain them into a single exec_command (e.g. separated by ';').
I'd suggest you take a look at the demos in the demos folder; they use the Channel class, which has a better API for blocking/non-blocking sends for both shell and exec, as in the sketch below.
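For reference, a minimal sketch of using a Channel directly and polling for output (the host, credentials, and command are hypothetical placeholders):

import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('192.168.2.34', username='pi', password='raspberry')  # hypothetical credentials

chan = client.get_transport().open_session()
chan.exec_command('ping -c 4 8.8.8.8')

# Non-blocking loop: print output as it arrives until the command exits
while not chan.exit_status_ready():
    if chan.recv_ready():
        print(chan.recv(4096).decode(), end='')
    time.sleep(0.1)

# Drain anything still buffered after the command exited
while chan.recv_ready():
    print(chan.recv(4096).decode(), end='')
client.close()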
You may need to load the bash profile before you run your command; otherwise you may get a 'command not found' error.
For example, I wrote the command command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql' with the purpose of dumping a MySQL table.
Then I had to load the profile manually before that dump command by prepending . ~/.profile; . ~/.bash_profile;
Example
my_command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql;'
pre_command = """
. ~/.profile;
. ~/.bash_profile;
"""
command = pre_command + my_command
stdin, stdout, stderr = ssh.exec_command(command)
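An alternative sketch: ask for a login shell explicitly, so the profile files are sourced automatically (same hypothetical mysqldump command as above):

# 'bash -l' starts a login shell, which sources ~/.profile / ~/.bash_profile;
# '-c' then runs the given command inside that shell
my_command = 'mysqldump -uu -pp -h1.1.1.1 -P999 table > table.sql'
stdin, stdout, stderr = ssh.exec_command('bash -lc "{}"'.format(my_command))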