I'm trying to use the spur library to launch a long-running command via ssh and then read and process its output one line at a time. The documentation says you can pass a file object using stdout=f, and run/spawn will call stdout.write for anything the subprocess writes to its stdout stream. I hit on the idea of creating an os.pipe() to make this work, but it doesn't. Can someone please suggest a fix?
NOTE: I've already got this working with paramiko.SSHClient.exec_command, but the interface is a bit low-level for my needs, so I want to learn how to do it with spur. Thanks!
import spur
import os

HOST = "rocky.lan"
USER = "rocky"
CMD = "while sleep 1; do date; done"

r, w = os.pipe()
r = os.fdopen(r, 'rb')
w = os.fdopen(w, 'wb')

ssh = spur.SshShell(hostname=HOST, username=USER)
child = ssh.spawn(CMD, stdout=w)

for line in iter(r.readline, ""):
    print(line, end="")
Since someone is bound to ask, the paramiko code looks like this:
from paramiko import SSHClient

HOST = "rocky.lan"
USER = "rocky"
CMD = "while sleep 1; do date; done"

ssh = SSHClient()
ssh.load_system_host_keys()
ssh.connect(HOST, username=USER)
stdin, stdout, stderr = ssh.exec_command(CMD)
for line in iter(stdout.readline, ""):
    print(line, end="")
I've discovered parallel-ssh, which seems to have parted company with paramiko and gone for python-ssh/python-ssh2 instead. A 5-minute test suggests that it combines paramiko's power with spur's simplicity, but sadly it still doesn't support ~/.ssh/config, so Perl's Net::OpenSSH is still my favourite :-) Here's the code I got working with pssh:
from pssh.clients import SSHClient

HOST = "rocky.lan"
USER = "rocky"
CMD = "while sleep 1; do date; done"

ssh = SSHClient(host=HOST, user=USER)
cmd = ssh.run_command(CMD)
for line in cmd.stdout:
    print(line)
So this is an alternative, but really I still need to know how to read the subprocess's stdout using spur.
Related
I have a script which can run on my host machine and several other servers. I want to launch this script as a background process on my host machine and on the remote machines via ssh, with stdout/stderr going to a log file on the host machine for the local background process and to log files on the remote machines for the remote background tasks.
I tried:
subprocess.check_output(['python', 'script.py', 'arg_1', ' > file.log ', ' & echo -ne $! '])
but it doesn't work: it gives me neither the PID nor writes to the file. It works with shell=True, but then I read that it is not good to use shell=True for security reasons.
Then I tried:
p = subprocess.Popen(['python', 'script.py', 'arg_1', ' > file.log '])
Now I can get the process PID, but the output is not written to the remote log file.
Using stdout/stderr arguments as suggested below will open the log file on my host machine, not the remote machine; I want to log on the remote machine instead.
append subprocess.Popen output to file?
Could someone please suggest a single command that works on my host machine and also ssh's to the remote server to launch the background process there, writing to the output file?
<HOW_TO_GET_PID> = subprocess.<WHAT>( ([] if 'localhost' else ['ssh','<remote_server>']) + ['python', 'script.py', 'arg_1' <WHAT>] )
Could someone please finish the above pseudo-code?
Thanks,
You're not going to get something that's safe and correct in a one-liner without making it unreadable; better not to try.
Note that we're using a shell here: in the local case we explicitly pass shell=True, whereas in the remote case ssh always, implicitly, starts a shell.
import shlex
import subprocess

def startBackgroundCommand(argv, outputFile, remoteHost=None, andGetPID=False):
    cmd_str = ' '.join(shlex.quote(word) for word in argv)
    if outputFile is not None:
        cmd_str += ' >%s' % (shlex.quote(outputFile),)
    if andGetPID:
        cmd_str += ' & echo "$!"'
    if remoteHost is not None:
        p = subprocess.Popen(['ssh', remoteHost, cmd_str], stdout=subprocess.PIPE)
    else:
        p = subprocess.Popen(cmd_str, stdout=subprocess.PIPE, shell=True)
    return p.communicate()[0]
# Run your command locally
startBackgroundCommand(['python', 'script.py', 'arg_1'],
                       outputFile='file.log', andGetPID=True)

# Or run your command remotely
startBackgroundCommand(['python', 'script.py', 'arg_1'],
                       remoteHost='foo.example.com', outputFile='file.log', andGetPID=True)
# At the beginning you can even program automatic daemonizing
# using os.fork(); otherwise, you run it with something like:
#     nohup python run_my_script.py &
# This will ensure that it continues running even if the SSH connection breaks.
from subprocess import Popen, PIPE, STDOUT

p = Popen(["python", "yourscript.py"], stdout=PIPE, stderr=STDOUT, stdin=PIPE)
p.stdin.close()

log = open("logfile.log", "wb")
log.write(b"PID: %i\n\n" % p.pid)

while True:
    line = p.stdout.readline()
    if not line:
        break
    log.write(line)
    log.flush()

p.stdout.close()
log.write(b"\nExit status: %i" % p.poll())
log.close()
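A variant sketch of the same idea (my own addition, Python 3 only): let Popen redirect straight to the log file and detach the child with start_new_session=True, which is usually enough for it to keep running after the parent exits:

import subprocess

log = open("file.log", "wb")           # log file name as in the question
p = subprocess.Popen(
    ["python", "script.py", "arg_1"],  # argv as in the question
    stdout=log,                        # redirect without needing a shell
    stderr=subprocess.STDOUT,
    start_new_session=True,            # detach into its own session (POSIX only)
)
print("PID:", p.pid)

This only covers the local case; for the remote case the redirection still has to happen inside the remote shell command, as shown above.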
I am trying to write to a custom program's stdin with paramiko. Here is a minimal (non-)working example:
~/stdin_to_file.py:
#! /usr/bin/python
import time, sys

f = open('/home/me/LOG', 'w')
while True:
    sys.stdin.flush()
    data = sys.stdin.read()
    f.write(data + '\n\n')
    f.flush()
    time.sleep(0.01)
Then I do these commands in IPython:
import paramiko
s = paramiko.client.SSHClient()
s.load_system_host_keys()
s.connect('myserver')
stdin, stdout, stderr = s.exec_command('/home/me/stdin_to_file.py')
stdin.write('Hello!')
stdin.flush()
Unfortunately, nothing then appears in ~/LOG. However, if I do
$ ~/stdin_to_file.py < some_other_file
The contents of some_other_file appear in ~/LOG.
Can anyone suggest where I've gone wrong? It seems like I'm doing the logical thing. None of these work either:

- stdin.channel.send('hi')
- using the get_pty parameter
- sending the output of cat - to stdin_to_file.py
sys.stdin.read() will keep reading until EOF, so in your paramiko script you need to close the stdin returned from exec_command(). But how?
1. stdin.close() would not work.
According to Paramiko's doc (v1.16):
Warning: To correctly emulate the file object created from a socket’s makefile() method, a Channel and its ChannelFile should be able to be closed or garbage-collected independently. Currently, closing the ChannelFile does nothing but flush the buffer.
2. stdin.channel.close() also has a problem.
Since stdin, stdout and stderr all share one single channel, stdin.channel.close() will also close stdout and stderr, which is not expected.
3. stdin.channel.shutdown_write()
The correct solution is to use stdin.channel.shutdown_write(), which disallows writing to the channel but still allows reading from it, so stdout.read() and stderr.read() would still work.
See following example to see the difference between stdin.channel.close() and stdin.channel.shutdown_write().
[STEP 101] # cat foo.py
import paramiko, sys, time

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname='127.0.0.1', username='root', password='password')

cmd = "sh -c 'read v; sleep 1; echo $v'"
stdin, stdout, stderr = ssh.exec_command(cmd)
if sys.argv[1] == 'close':
    stdin.write('hello world\n')
    stdin.flush()
    stdin.channel.close()
elif sys.argv[1] == 'shutdown_write':
    stdin.channel.send('hello world\n')
    stdin.channel.shutdown_write()
else:
    raise Exception()
sys.stdout.write(stdout.read())
[STEP 102] # python foo.py close            # It outputs nothing.
[STEP 103] # python foo.py shutdown_write   # This works fine.
hello world
[STEP 104] #
I want to execute a mysqldump in python and provide the password when it is requested from the mysqldump.
Adding the password in the command line is not an option, it must be provided via stdin.
This is what I've done so far:
command = [
    'mysqldump',
    '-h', mysqlhost,
    '-P', mysqlport,
    '-u', mysqluser,
    '-p',
    mysqldb
]
mysqlfile = mysqlpath + "/" + mysqldb + ".sql"
with open(mysqlfile, "w+") as file:
    p = subprocess.Popen(command, stdin=subprocess.PIPE, stdout=file)
    p.communicate(input=mysqlpass)
    p.wait()
But when I execute the code the terminal hangs requesting the password.
Thank you.
You can use pexpect for that. This is modified code as I had to test it, but you get the idea:
import pexpect

command2 = 'mysqldump -h localhost -u root -p xyzzy'
mysqlfile = "/tmp/foo.sql"

with open(mysqlfile, "w+") as file:
    p = pexpect.spawn(command2)
    p.expect("Enter password: ")
    p.sendline("foobar")
    q = p.read()
    p.wait()
    file.write(q)
here "foobar" is my database password.
Hannu
For me, the accepted answer did not solve the problem. Presumably it is related to the Python version I am using, which is 3.5.
The difficulties I had:

- p.read() was blocking the process (I always killed the script at some point).
- The chunk approach by David Rojo did not block, but .read(1024) returned integers where strings were expected by file.write(...). I assume this is related to differences in the way unicode is handled in Python 2 and 3, since adding the parameter encoding='utf-8' to pexpect.spawn() gave me the proper results. However, I then had to adapt the writing of the file so that it supports unicode as well.
- Another problem with the for chunk in p.read(1024): approach is that the reading finished before mysqldump finished writing the dump to stdout. I guess that in this case mysqldump was too slow to deliver. I changed my solution so that it waits for EOF.
Note: I just started learning Python a couple of days ago; please correct me if my assumptions or conclusions are wrong or misleading.
Code example
The script below is my minimal working example for calling mysqldump and providing the password when mysqldump asks for it:
#!/usr/bin/env python3
import pexpect
import io

cmd = 'mysqldump -u MYSQL_USER -p DATABASES(S)'
sqlfile = "/home/user/test-database-dump.sql"
password = 'secret'

with io.open(sqlfile, 'w', encoding="utf-8") as file:
    print('Calling mysqldump...')
    p = pexpect.spawn(cmd, encoding='utf-8')
    p.expect("Enter password: ")
    # Send password to mysqldump
    p.sendline(password)
    # Capture the dump
    print('Reading dump from process and writing it to file...')
    while not p.eof():
        chunk = p.readline()
        file.write(chunk)
    print('Finished.')
    p.close()
    print(p.exitstatus, p.signalstatus)
Related questions that are essentially asking the same thing, but have answers that don't work for me:

- Make python enter password when running a csh script
- How to interact with ssh using subprocess module
- How to execute a process remotely using python
I want to ssh into a remote machine and run one command. For example:
ssh <user>@<ipv6-link-local-addr>%eth0 sudo service fooService status
The problem is that I'm trying to do this through a python script with only the standard libraries (no pexpect). I've been trying to get this to work using the subprocess module, but calling communicate always blocks when requesting a password, even though I supplied the password as an argument to communicate. For example:
proc = subprocess.Popen(
    [
        "ssh",
        "{testUser1}@{testHost1}%eth0".format(**locals()),
        "sudo service cassandra status"],
    shell=False,
    stdin=subprocess.PIPE)
a, b = proc.communicate(input=testPasswd1)
print "a:", a, "b:", b
print "return code: ", proc.returncode
I've tried a number of variants of the above, as well (e.g., removing "input=", adding/removing subprocess.PIPE assignments to stdout and stderr). However, the result is always the same prompt:
ubuntu@<ipv6-link-local-addr>%eth0's password:
Am I missing something? Or is there another way to achieve this using the python standard libraries?
This answer is just an adaptation of this answer by Torxed, which I recommend you go upvote. It simply adds the ability to capture the output of the command you execute on the remote server. (ssh reads the password from the controlling terminal rather than its stdin, which is why a pseudo-terminal from pty.fork() is needed at all.)
import pty
from os import waitpid, execv, read, write

class ssh():
    def __init__(self, host, execute='echo "done" > /root/testing.txt',
                 askpass=False, user='root', password=b'SuperSecurePassword'):
        self.exec_ = execute
        self.host = host
        self.user = user
        self.password = password
        self.askpass = askpass
        self.run()

    def run(self):
        command = [
            '/usr/bin/ssh',
            self.user + '@' + self.host,
            '-o', 'NumberOfPasswordPrompts=1',
            self.exec_,
        ]

        # PID = 0 for child, and the PID of the child for the parent
        pid, child_fd = pty.fork()

        if not pid:  # Child process
            # Replace child process with our SSH process
            execv(command[0], command)

        ## if we haven't set up pub-key authentication
        ## we can loop for a password prompt and "insert" the password.
        while self.askpass:
            try:
                output = read(child_fd, 1024).strip()
            except:
                break
            lower = output.lower()

            # Write the password
            if b'password:' in lower:
                write(child_fd, self.password + b'\n')
                break
            elif b'are you sure you want to continue connecting' in lower:
                # Adding key to known_hosts
                write(child_fd, b'yes\n')
            else:
                print('Error:', output)

        # See if there's more output to read after the password has been sent,
        # and capture it in a list.
        output = []
        while True:
            try:
                output.append(read(child_fd, 1024).strip())
            except:
                break

        waitpid(pid, 0)
        return b''.join(output).decode()  # the chunks read() returns are bytes

if __name__ == "__main__":
    s = ssh("some ip", execute="ls -R /etc", askpass=True)
    # note: __init__ already ran the command once; this runs it a second time
    print(s.run())
Output:
/etc:
adduser.conf
adjtime
aliases
alternatives
apm
apt
bash.bashrc
bash_completion.d
<and so on>
This is my first post on StackOverflow, so I hope to do it the right way! :)
I have a task for my new job that requires connecting to several servers and executing a Python script on all of them. I'm not very familiar with servers (and just started using paramiko), so I apologize for any big mistakes!
The script I want to run on them modifies the authorized_keys file, but to start I'm trying it with only one server and not yet using the aforementioned script (I don't want to make a mistake and block the server in my first task!).
I'm just trying to list the directory on the remote machine with a very simple function called getDir(). So far, I've been able to connect to the server with paramiko using the basics (I'm using pdb to debug the script, by the way):
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(getDir())
This is the function to get the directory list:
getDir.py
#!/usr/bin/python
import os
import pdb

def get_dir():
    pdb.set_trace()
    print "Current dir list is:"
    for item in os.listdir(os.getcwd()):
        print item
While debugging I got the directory list of my local machine instead of the one from the remote machine... is there a way to pass a Python function as a parameter through paramiko? I would like to just have the script locally and run it remotely, like when you do it with a bash file from ssh with:
ssh -i pth/to/key username@domain.com 'bash -s' < script.sh
so as to avoid copying the Python script to every machine and then running it from them (I guess with the above command the script would also be copied to the remote machine and then deleted, right?). Is there a way to do that with paramiko.SSHClient()?
I have also tried to modify the code and use the standard output of the channel that exec_command creates to list the directory, leaving the scripts like:
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(getDir())
    for line in o.readlines():
        print line
    for line in e.readlines():
        print line
getDir.py
import os

def get_dir():
    return ', '.join(os.listdir(os.getcwd()))
But with this, it actually tries to run the local directory list as commands (which actually makes sense the way I have it). I had to convert the list to a string because I was getting a TypeError saying that it expects a string or a read-only character buffer, not a list... I know this was a desperate attempt to pass the function... Does anyone know how I could do such a thing (pass a local function through paramiko to execute it on a remote machine)?
If you have any corrections or tips on the code, they are very much welcome (actually, any kind of help would be very much appreciated!).
Thanks a lot in advance! :)
You cannot just execute a Python function through ssh. ssh is just a tunnel with your code on one side (client) and a shell on the other (server). You should execute shell commands on the remote side.
If using raw ssh code is not critical, I suggest Fabric as a library for writing administration tools. It contains tools for easy ssh handling, file transfer, sudo, parallel execution, and more.
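For illustration, a minimal sketch with Fabric's 2.x API (assuming it is installed; the host name here is made up):

from fabric import Connection

c = Connection("server.example.com", user="root")
result = c.run("ls -la", hide=True)  # hide=True captures output instead of echoing it
print(result.stdout)
c.close()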
I think you might want to change the parameters you're passing into ssh.exec_command. Here's an idea:
Instead of doing:
def get_dir():
    return ', '.join(os.listdir(os.getcwd()))

i, o, e = ssh.exec_command(getDir())
You might want to try:
i, o, e = ssh.exec_command('pwd')
print(o.readlines())
And other things to explore:

- Writing a bash script or a Python script that lives on your servers. You can use Paramiko to log onto the server and execute the script with ssh.exec_command('some_script.sh') or ssh.exec_command('some_script.py').
- Paramiko has some FTP/SFTP utilities, so you can actually use it to put the script on the server and then execute it (a sketch follows below).
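A rough, untested sketch of that second option, reusing the ssh client from the question (the remote path is made up):

# upload the script over SFTP, run it remotely, then clean up
sftp = ssh.open_sftp()
sftp.put("getDir.py", "/tmp/getDir.py")
sftp.close()

# note: the uploaded file must actually call its function at the top
# level for the command to print anything
stdin, stdout, stderr = ssh.exec_command("python /tmp/getDir.py")
print(stdout.read().decode())
ssh.exec_command("rm /tmp/getDir.py")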
It is possible to do this by using a here document to feed a module into the remote server's python interpreter.
remotepypath = "/usr/bin/"

# open the module as a text file
with open("getDir.py", "r") as f:
    mymodule = f.read()

# setup from OP code
ssh = paramiko.SSHClient()
ssh.load_host_keys("pth/to/known_hosts")
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
ssh.connect(server, username="root", pkey=my_key)

# use here document to feed module into python interpreter
stdin, stdout, stderr = ssh.exec_command("{p}python - <<EOF\n{s}\nEOF".format(p=remotepypath, s=mymodule))
print("stderr: ", stderr.readlines())
print("stdout: ", stdout.readlines())