I have a Linux box that runs Cisco IOS, and I sometimes need to SSH into it to reboot it. I've written a batch file that calls Cygwin; Cygwin then calls Python, which runs a Python script.
Batch File:
cd c:\cygwin64\bin
bash --login -i -c "python3 /home/Owner/uccxtesting.py"
Python Script:
import pexpect
import time
import sys
server_ip = "10.0.81.104"
server_user = "administrator"
server_pass = "secretpassword"
sshuccx1 = pexpect.spawn('ssh %s@%s' % (server_user, server_ip))
sshuccx1.logfile_read = sys.stdout.buffer
sshuccx1.timeout = 180
sshuccx1.expect('.*password:')
sshuccx1.sendline(server_pass)
sshuccx1.expect('.admin:')
sshuccx1.sendline('utils system restart')
sshuccx1.expect('Enter (yes/no)?')
sshuccx1.sendline('yes')
time.sleep(30)
When I run this, it stops at the Enter (yes/no) prompt. I've seen plenty of examples of pexpect with expect, but there is some whitespace after the question mark, and I don't know how to tell Python to expect that.
There may be a bug:
utils system restart prompts for force restart (https://bst.cisco.com/bugsearch/bug/CSCvw22828)
Replace time.sleep(30) with the following code to answer a possible force restart prompt. If it works, you can get rid of the try...except and print commands that I added for debugging:
try:
    index = -1
    while index != 0:
        # expect_exact returns the index of the pattern it matched:
        # 0 for 'succeeded', 1 for 'force'
        index = sshuccx1.expect_exact(['succeeded', 'force'], timeout=300)
        if index == 1:
            print('Forcing restart...')
            sshuccx1.sendline('yes')
    print('Operation succeeded')
    print(str(sshuccx1.before))
except pexpect.ExceptionPexpect:
    e_type, e_value, _ = sys.exc_info()
    print('Error: ' + pexpect.ExceptionPexpect(e_type).get_trace())
    print(e_type, e_value)
Also, change sshuccx1.expect('Enter (yes/no)?') to sshuccx1.expect_exact('Enter (yes/no)?'). The expect method matches a regex pattern, so the parentheses and the question mark are treated as metacharacters rather than literal text (see https://pexpect.readthedocs.io/en/stable/api/pexpect.html#pexpect.spawn.expect_exact).
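If you'd rather keep using expect, a small sketch of an alternative is to escape the literal prompt and allow for the trailing whitespace you saw (the \s* is an assumption about how much whitespace follows):

import re
# Escape the regex metacharacters so '(', ')' and '?' match literally,
# then allow any trailing whitespace after the prompt.
sshuccx1.expect(re.escape('Enter (yes/no)?') + r'\s*')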
Related
I'm trying to use Python to control numerous compiled executables, but I'm running into timing issues. I need to be able to run two executables simultaneously, and also to 'wait' until an executable has finished before starting another one. Also, some of them require superuser privileges. Here is what I have so far:
import os
sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = "~/Desktop/folder/"
commandA = filename+executable1
commandB = filename+executable2
commandC = filename+executable3
os.system('echo %s | sudo %s; %s' % (sudoPassword, commandA, commandB))
os.system('echo %s | sudo %s' % (sudoPassword, commandC))
print ('DONESIES')
Assuming that os.system() waits for the executable to finish prior to moving to the next line, this should run EXEC1 and EXEC2 simultaneously, and after they finish run EXEC3...
But it doesn't. Actually, it even prints 'DONESIES' in the shell before commandB even finishes...
Please help!
Your script will still execute all 3 commands sequentially. In shell scripts, the semicolon is just a way to put more than one command on one line. It doesn't do anything special; it just runs them one after the other.
If you want to run external programs in parallel from a Python program, use the subprocess module: https://docs.python.org/2/library/subprocess.html
Use subprocess.Popen to run multiple commands in the background. If you just want the programs' stdout/stderr to go to the screen (or get dumped completely), it's pretty straightforward. If you want to process the output of the commands, that gets more complicated, and you'd likely start a thread per command.
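Here is a rough sketch of that thread-per-command approach, assuming you only need to collect each command's output (the echo commands are placeholders):

import subprocess
import threading

def run_and_capture(cmd, results, key):
    # communicate() waits for the process and collects stdout/stderr
    # without deadlocking on a full pipe buffer
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    results[key] = p.communicate()

results = {}
threads = [threading.Thread(target=run_and_capture, args=(['echo', 'one'], results, 'A')),
           threading.Thread(target=run_and_capture, args=(['echo', 'two'], results, 'B'))]
for t in threads:
    t.start()
for t in threads:
    t.join()  # both commands have finished; their output is in results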
But here is the case that matches your example:
import os
import subprocess as subp
sudoPassword = "PASS"
executable1 = "EXEC1"
executable2 = "EXEC2"
executable3 = "EXEC3"
filename = os.path.expanduser("~/Desktop/folder/")
commandA = os.path.join(filename, executable1)
commandB = os.path.join(filename, executable2)
commandC = os.path.join(filename, executable3)
def sudo_cmd(cmd, password):
    # 'sudo -S' reads the password from stdin, so pipe it in
    p = subp.Popen(['sudo', '-S'] + cmd, stdin=subp.PIPE)
    p.stdin.write(password + '\n')
    p.stdin.close()
    return p
# run A and B in parallel
exec_A = sudo_cmd([commandA], sudoPassword)
exec_B = sudo_cmd([commandB], sudoPassword)
# wait for A before starting C
exec_A.wait()
exec_C = sudo_cmd([commandC], sudoPassword)
# wait for the stragglers
exec_B.wait()
exec_C.wait()
print ('DONESIES')
I'm making a custom shell in Python for a very limited user on a server, who is logged in via ssh with public key authentication. They need to be able to run ls, find -type d, and cat in specific directories with certain limitations. This works fine if you run something like ssh user@server -i keyfile, because you see the interactive prompt and can run those commands. However, something like ssh user@server -i keyfile "ls /var/log" doesn't: ssh simply hangs, with no response. Using the -v switch I've found that the connection is succeeding, so the problem is in my shell. I'm also fairly certain that the script isn't even being started, since a print sys.argv at the beginning of the program prints nothing. Here's the code:
#!/usr/bin/env python
import subprocess
import re
import os
with open(os.devnull, 'w') as devnull:
    proc = lambda x: subprocess.Popen(x, stdout=subprocess.PIPE, stderr=devnull)
    while True:
        try:
            s = raw_input('> ')
        except:
            break
        try:
            cmd = re.split(r'\s+', s)
            if len(cmd) != 2:
                print 'Not permitted.'
                continue
            if cmd[0].lower() == 'l':
                # Snip: verify directory
                cmd = proc(['ls', cmd[1]])
                print cmd.stdout.read()
            elif cmd[0].lower() == 'r':
                # Snip: verify directory
                cmd = proc(['cat', cmd[1]])
                print cmd.stdout.read()
            elif cmd[0].lower() == 'll':
                # Snip: verify directory
                cmd = proc(['find', cmd[1], '-type', 'd'])
                print cmd.stdout.read()
            else:
                print 'Not permitted.'
        except OSError:
            print 'Unknown error.'
And here's the relevant line from ~/.ssh/authorized_keys:
command="/path/to/shell $SSH_ORIGINAL_COMMAND" ssh-rsa [base-64-encoded-key] user@host
How can I make the shell run the command when it is passed on the command line, so it can be used in scripts without starting an interactive shell?
The problem with ssh not responding is related to the fact that ssh user@host cmd does not allocate a terminal for the command being run. Try calling ssh user@host -t cmd.
However, even if you pass the -t option, you'd still have another problem with your script: it only works interactively and totally ignores the $SSH_ORIGINAL_COMMAND being passed. A naive solution would be to check sys.argv, and if it has more than one element, skip the loop and execute only the command it contains.
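A minimal sketch of that idea, assuming the l/r/ll dispatch above is factored into a hypothetical handle_command() helper:

import sys

def handle_command(line):
    # hypothetical helper: the l/r/ll dispatch from the loop above
    pass

if len(sys.argv) > 1:
    # ssh invoked us with $SSH_ORIGINAL_COMMAND word-split into arguments:
    # reassemble it, run that one command, and exit instead of prompting
    handle_command(' '.join(sys.argv[1:]))
else:
    while True:
        try:
            handle_command(raw_input('> '))
        except EOFError:
            break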
I have a Python routine which invokes some kind of CLI (e.g. telnet) and then executes commands in it. The problem is that sometimes the CLI refuses the connection, and the commands are then executed in the host shell, resulting in various errors. My idea is to check whether the shell prompt changes after invoking the CLI.
The question is: how can I get the shell prompt string in Python?
Echoing PS1 is not a solution, because some CLIs cannot run it and it returns a notation-like string instead of the actual prompt:
SC-2-1:~ # echo $PS1
\[\]\h:\w # \[\]
EDIT
My routine:
def run_cli_command(self, ssh, cli, commands, timeout=10):
    ''' Sends one or more commands to some cli and returns the answer. '''
    try:
        channel = ssh.invoke_shell()
        channel.settimeout(timeout)
        channel.send('%s\n' % (cli))
        if 'telnet' in cli:
            time.sleep(1)
        time.sleep(1)
        # I need to check the prompt here
        w = 0
        while (channel.recv_ready() == False) and (w < timeout):
            w += 1
            time.sleep(1)
        channel.recv(9999)
        if type(commands) is not list:
            commands = [commands]
        ret = ''
        for command in commands:
            channel.send("%s\r\n" % (command))
            w = 0
            while (channel.recv_ready() == False) and (w < timeout):
                w += 1
                time.sleep(1)
            ret += channel.recv(9999)  ### The size of the read buffer can be a bottleneck...
    except Exception, e:
        # print str(e)  ### for debugging
        return None
    channel.close()
    return ret
Some explanation is needed here: the ssh parameter is a paramiko.SSHClient() instance. I use this code to log in to a server, and from there I call another CLI, which can be SSH, telnet, etc.
I'd suggest sending commands that set PS1 to a known string. I did this when I used Oracle sqlplus from a Korn shell script, as a coprocess, to know when to stop reading data/output from the last statement I issued. So basically, you'd send:
PS1='end1>'; command1
Then you'd read lines until you see "end1>" (for extra robustness, add a newline at the end of PS1).
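A rough sketch of that approach against the paramiko channel from the question (the sentinel string, polling interval, and timeout are assumptions):

import time

def send_and_wait(channel, command, sentinel='end1>', timeout=10):
    # Set PS1 to a known sentinel, then run the command; the sentinel
    # reappearing in the output means the prompt is back.
    channel.send("PS1='%s'; %s\r\n" % (sentinel, command))
    buf = ''
    deadline = time.time() + timeout
    while sentinel not in buf and time.time() < deadline:
        if channel.recv_ready():
            buf += channel.recv(9999)
        else:
            time.sleep(0.2)
    # Caveat: if the remote side echoes the command back, the sentinel in
    # the echoed PS1 assignment may match first; building the sentinel at
    # runtime from two concatenated halves avoids that.
    return buf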
I am trying to run a Python program to see if the screen program is running. If it is, then the program should not run the rest of the code. This is what I have and it's not working:
#!/usr/bin/python
import os

var1 = os.system('screen -r > /root/screenlog/screen.log')
fd = open("/root/screenlog/screen.log")
content = fd.readline()
while content:
    if content == "There is no screen to be resumed.":
        os.system('/etc/init.d/tunnel.sh')
        print "The tunnel is now active."
    else:
        print "The tunnel is running."
fd.close()
I know there are probably several things here that don't need to be and quite a few that I'm missing. I will be running this program in cron.
from subprocess import Popen, PIPE

def screen_is_running():
    out = Popen("screen -list", shell=True, stdout=PIPE).communicate()[0]
    return not out.startswith("This room is empty")
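Wired into your cron job, usage might look like this (a sketch; the tunnel path and messages are taken from your script):

import os

if not screen_is_running():
    os.system('/etc/init.d/tunnel.sh')
    print "The tunnel is now active."
else:
    print "The tunnel is running."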
Maybe the error message that you redirect in the first os.system call is written to standard error instead of standard output. You should try replacing that line with:
var1 = os.system ('screen -r 2> /root/screenlog/screen.log')
Note the 2> to redirect standard error to your file.
I've just become the system admin for my research group's cluster and, in this respect, am a novice. I'm trying to make a few tools to monitor the network and need help getting started implementing them with python (my native tongue).
For example, I would like to view who is logged onto remote machines. By hand, I'd ssh and who, but how would I get this info into a script for manipulation? Something like,
import remote_info as ri
ri.open("foo05.bar.edu")
ri.who()
Out[1]:
hutchinson tty7 2009-08-19 13:32 (:0)
hutchinson pts/1 2009-08-19 13:33 (:0.0)
Similarly for things like cat /proc/cpuinfo to get the processor information of a node. A starting point would be really great. Thanks.
Here's a simple, cheap solution to get you started:
from subprocess import *
p = Popen('ssh servername who', shell=True, stdout=PIPE)
p.wait()
print p.stdout.readlines()
This returns, for example:
['usr pts/0 2009-08-19 16:03 (kakapo)\n',
'usr pts/1 2009-08-17 15:51 (kakapo)\n',
'usr pts/5 2009-08-17 17:00 (kakapo)\n']
and for cpuinfo:
p = Popen('ssh servername cat /proc/cpuinfo', shell=True, stdout=PIPE)
I've been using Pexpect, which lets you ssh into machines, send commands, read the output, and react to it, with success. I even started an open-source project around it, Proxpect, which hasn't been updated in ages, but I digress...
The pexpect module can help you interface with ssh. More or less, here is what your example would look like:

child = pexpect.spawn('ssh servername')
child.expect('Password:')
child.sendline('ABCDEF')
child.expect(r'\$')    # wait for the shell prompt
child.sendline('who')  # sendline returns a byte count, not the output
child.expect(r'\$')    # wait for the prompt to come back
output = child.before  # everything printed before the prompt matched
If your needs grow beyond a simple "ssh remote-host.example.org who", there is an awesome Python library called RPyC. It has a so-called "classic" mode which allows you to almost transparently execute Python code over the network with a few lines of code. It's a very useful tool for trusted environments.
Here's an example from Wikipedia:
import rpyc
# assuming a classic server is running on 'hostname'
conn = rpyc.classic.connect("hostname")

# runs os.listdir() and os.stat() remotely, printing results locally
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)

remote_ls("/usr/bin")
If you're interested, there's a good tutorial on their wiki.
But, of course, if you're perfectly fine with ssh calls using Popen, or just don't want to run a separate RPyC daemon, then this is definitely overkill.
This covers the bases. Notice the use of sudo for things that needed more privileges. We configured sudo to allow those commands for that user without needing a password typed.
Also, keep in mind that you should run ssh-agent to make this "make sense". But all in all, it works really well. Running deploy-control httpd configtest will check the Apache configuration on all the remote servers.
#!/usr/local/bin/python
import subprocess
import sys

# The user@host: for the SourceURLs (NO TRAILING SLASH)
RemoteUsers = [
    "deploy@host1.example.com",
    "deploy@host2.appcove.net",
]

def HelpAndExit():
    # Minimal stand-in for the original (elided) helper: print usage and abort.
    print "Usage: deploy-control (httpd configtest|httpd graceful|httpd status|disk usage|uptime)"
    sys.exit(2)

###################################################################################################
# Global Variables

Arg = None

# Implicitly verified below in if/else
Command = tuple(sys.argv[1:])

ResultList = []
###################################################################################################
for UH in RemoteUsers:
    print "-"*80
    print "Running %s command on: %s" % (Command, UH)
    #----------------------------------------------------------------------------------------------
    if Command == ('httpd', 'configtest'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd configtest'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'graceful'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd graceful'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('httpd', 'status'):
        CommandResult = subprocess.call(('ssh', UH, 'sudo /sbin/service httpd status'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('disk', 'usage'):
        CommandResult = subprocess.call(('ssh', UH, 'df -h'))
    #----------------------------------------------------------------------------------------------
    elif Command == ('uptime',):
        CommandResult = subprocess.call(('ssh', UH, 'uptime'))
    #----------------------------------------------------------------------------------------------
    else:
        print
        print "#"*80
        print
        print "Error: invalid command"
        print
        HelpAndExit()
    #----------------------------------------------------------------------------------------------
    ResultList.append(CommandResult)
    print

###################################################################################################
if any(ResultList):
    print "#"*80
    print "#"*80
    print "#"*80
    print
    print "ERRORS FOUND. SEE ABOVE"
    print
    sys.exit(1)  # nonzero exit status signals the failures above
else:
    print "-"*80
    print
    print "Looks OK!"
    print
    sys.exit(0)
Fabric is a simple way to automate some simple tasks like this, the version I'm currently using allows you to wrap up commands like so:
run('whoami', fail='ignore')
You can specify config options (config.fab_user, config.fab_password) for each machine you need, if you want to automate username/password handling.
More info on Fabric here:
http://www.nongnu.org/fab/
There is a new version which is more Pythonic. I'm not sure whether that will be better for you in this case, but it works fine for me at present.