Unix `at` scheduling with python script: Permission denied

I'm trying to create a scheduled task using the Unix at command. I wanted to run a Python script, but quickly realized that at is configured to run whatever file I give it with sh. In an attempt to circumvent this, I created a file containing the command python mypythonscript.py and passed that to at instead.
I have set the permissions on the python file to executable by everyone (chmod a+x), but when the at job runs, I am told python: can't open file 'mypythonscript.py': [Errno 13] Permission denied.
If I run source myshwrapperscript.sh, the shell script invokes the python script fine. Is there some obvious reason why I'm having permissions problems with at?
Edit: I got frustrated with the Python script, so I went ahead and made an sh script version of the thing I wanted to run. I am now finding that the sh script returns rm: cannot remove <filename>: Permission denied (this was a temporary file I was creating to store intermediate data). Is there any way I can authorize these operations with my own credentials, despite not having sudo access? All of this works perfectly when I run it myself, but everything seems to go to shit when at does it.

Start the script using python rather than the script name itself, e.g.: python path/to/script.py.
at tries to run everything as a sh script.

EDIT: The at command reads a list of shell commands from standard input and runs them with sh. So pipe the command line in, like this:
echo "python mypythonscript.py" | at now + 1 minute
In this case, the #! line at the beginning of the script is not necessary.
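If you would rather submit the job from Python itself, here is a minimal sketch of the same idea (the interpreter and script paths are hypothetical; absolute paths keep the job independent of the directory it was submitted from):
import subprocess

# Hypothetical paths; substitute your own interpreter and script locations.
job = "/usr/bin/python /home/me/mypythonscript.py\n"

# Pipe the job text into at's standard input, scheduling it one minute out.
subprocess.run(["at", "now", "+", "1", "minute"], input=job.encode(), check=True)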

I have been working on task scheduling between servers and clients recently, and I just abstracted out my scheduling code and put it up on GitHub. It was meant to schedule several simulations across multiple machines that each have all the simulations in their filesystems. The idea is that since each machine had a different processor, each would compute its simulation, scp the results back to the server, and request the next simulation from the server. The server responds by scheduling a task on the client to run the next unrun simulation.
Hope this will help you.
NOTE: Since I only abstracted and uploaded the files about 5 minutes ago, I haven't had the chance to test the abstractions. However, if you come across any bugs, please let me know and I'll debug them as soon as I can.
GitHub seems to be down right now, so here are the files that you'll need:
On the server:
serverside
#!/bin/bash
projectDir=~/
# time (HH:MM) of the most recently queued at job
minute=`atq | sort -t" " -k1 -nr | head -n1 | cut -d' ' -f4 | cut -d":" -f1,2`
# current time (HH:MM)
curr=`date | cut -d' ' -f4 | cut -d':' -f1,2`
# two minutes after the later of the two, wrapping past the hour if needed
time=`python -c "import sys; hour,minute=map(int,max(sys.argv[1:]).split(':')); minute += 2; hour, minute = [(hour,minute), ((hour+1)%24,minute%60)][minute>=60]; print '%d:%02d'%(hour, minute)" "$minute" "$curr"`
cat <<EOF | at "$time"
python $projectDir/serverside.py $1
EOF
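In other words, the script schedules serverside.py two minutes after whichever is later (as a string comparison of zero-padded HH:MM values) of the most recently queued at job and the current time. For example, if atq's newest job is at 14:59 and it is now 15:03, max gives "15:03" and the job is queued for 15:05; if the minutes overflow (15:59 + 2 would be 15:61), the branch in the one-liner wraps it to 16:01. The script takes a client IP as its single argument, e.g. sh serverside 192.168.0.101 (a hypothetical address).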
serverside.py
import sys
import time
import smtplib
import subprocess
import os
import itertools

IP = sys.argv[1].strip()

PROJECT_DIR = "" # relative path (relative to the home directory) to the root directory of the project, which contains all subdirs containing simulation files
USERS = { # keys are IPs of the clients, values are user names on those clients
}
HOMES = { # keys are the IPs of clients, values are the absolute paths to the home directories on these clients for the usernames on these clients identified in USERS
}
HOME = None # absolute path to the home directory on the server
SMTP_SERVER = ""
SMTP_PORT = None
FROM_ADDR = None # the email address from which notification emails will be sent
TO_ADDR = None # the email address to which notification emails will be sent

def get_next_simulation():
    """ This function returns a list.
    The list contains N>0 elements.
    Each of the first N-1 elements are names of directories (not paths), which when joined together form a relative path (relative from PROJECT_DIR).
    The Nth element is the name of the file - the simulation to be run.
    Before the end user implements this function, it is assumed that N=3.
    Once this function has been implemented, if N!=3, change the code in the lines annotated with "Change code for N in this line"
    Also look for this annotation in clientside.py and clientsideexec """
    pass

done = False
DIR1, DIR2, FILENAME = get_next_simulation() # Change code for N in this line
while not done:
    try:
        subprocess.check_call("""ssh %(user)s@%(host)s 'sh %(home)s/%(project)s/clientside %(dir1)s %(dir2)s %(filename)s %(host)s' """ %{'user':USERS[IP], 'host':IP, 'home':HOMES[IP], 'project':PROJECT_DIR, 'dir1':DIR1, 'dir2':DIR2, 'filename':FILENAME}, shell=True) # Change code for N in this line
        done = True
        os.remove("%(home)s/%(project)s/%(dir1)s/%(dir2)s/%(filename)s" %{'home':HOME, 'project':PROJECT_DIR, 'dir1':DIR1, 'dir2':DIR2, 'filename':FILENAME}) # Change code for N in this line
        sm = smtplib.SMTP(SMTP_SERVER, SMTP_PORT)
        sm.sendmail(FROM_ADDR, TO_ADDR, "running %(project)s/%(dir1)s/%(dir2)s/%(filename)s on %(host)s" %{'project':PROJECT_DIR, 'dir1':DIR1, 'dir2':DIR2, 'filename':FILENAME, 'host':IP}) # Change code for N in this line
    except:
        pass
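Note how the loop is structured: check_call raises CalledProcessError on a non-zero exit status, so done is only set, the simulation file only removed, and the notification email only sent once the ssh dispatch has succeeded; any failure lands in the bare except and the loop simply retries.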
On the client:
clientside
#!/bin/bash
projectpath=~/
python $projectpath/clientside.py "$@"
clientside.py
import subprocess
import sys
import datetime
import os
DIR1, DIR2, FILENAME, IP = sys.argv[1:]
try:
    subprocess.check_call("sh ~/cisdagp/clientsideexec %(dir1)s %(dir2)s %(filename)s %(ip)s" %{'dir1':DIR1, 'dir2':DIR2, 'filename':FILENAME, 'ip':IP}, shell=True, executable='/bin/bash') # Change code for N in this line
except:
    pass
clientsideexec
#!/bin/bash
projectpath=~/
user=''
serverIP=''
SMTP_SERVER=''
SMTP_PORT=''
FROM_ADDR=''
TO_ADDR=''
MESSAGE=''
cat <<EOF | at now + 2 minutes
cd $projectpath/$1/$2 # Change code for N in this line
sh $3
# copy the logfile back to the server
scp logfile$3 $user@$serverIP:$projectpath/$1/$2/
cd $projectpath
python -c "import smtplib; sm = smtplib.SMTP('$SMTP_SERVER', $SMTP_PORT); sm.sendmail('$FROM_ADDR', '$TO_ADDR', '$MESSAGE')"
python clientsiderequest.py
EOF
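So the job queued on the client changes into the simulation's directory, runs the simulation, scps the logfile back to the matching directory on the server, sends a notification email, and finally runs clientsiderequest.py (not shown here) to ask the server for the next simulation.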

Could you try: echo 'python mypythonscript.py' | at ...

Related

Attempt to generate email with new lines from Python through Bash mailx command

Where I work, we have servers that are pre-configured for the use of the bash mail command to send attachments and messages. I'm working on a notification script that will monitor server activity and generate an email if it detects an issue. I'm using the subprocess.call function in order to send a bash command.
I am successful in sending messages, but in the body portion of the email, each notification line is strung together rather than each notification appearing on a separate line. I have tried appending "\n" and "\r\n" to each line within the string. I have to use double backslashes, as Python will otherwise interpret these as literal newlines before it sends the echo command. I also ran the command "shopt -s xpg_echo" before using echo piped to mail with the double backslashes, but this had no effect. I also tried using echo without the "-e" option, and this had no effect either.
The trick is that I need python to send the new line to bash and then somehow get bash to interpret this as a new line using echo piped through to mail. Here is a sample of the code:
import os
import shutil
import sys
import time
import re
import subprocess
import smtplib

serviceports = {}  # service name -> list of ports (initialization implied by the assignments below)
serviceports["SCP Test"] = ["22"]
serviceports["Webtier"] = ["9282"]
report = []   # warnings collected for the email body (implied by report.append below)
message = ""  # email body built up from the warnings

bashCommand = "netstat -an | grep LISTEN | grep -v LISTENING"
netstat_results = subprocess.check_output(bashCommand, shell=True)
netstat_results = str(netstat_results)

# Iterate through all ports for each service and assign down ports to variable
for servicename, ports in serviceports.items():
    for ind_port in ports:
        ind_port_chk = ":" + ind_port
        count = sum(1 for _ in re.finditer(r'\b%s\b' % re.escape(ind_port_chk), netstat_results))
        if count == 0:
            warning = servicename + " on port " + ind_port + " is currently down!"
            report.append(warning)

for warning in report:
    message = message + warning + "\\\n"

fromaddr = serveridsimp + "@xxxxx.com"  # serveridsimp is defined elsewhere in the full script
toaddr = 'email@xxxxx.com'
subject = "Testing..."
body = message
cmd = 'echo -e '+body+' | mail -s '+subject+' -r '+fromaddr+' '+toaddr
send = subprocess.call(cmd, shell=True)
The code runs a netstat command and assigns the output to a string. It then iterates through the specified ports and searches for each port in the netstat string (netstat_results). It collects a warning for every port not located in netstat_results into a list (report), then appends each line plus \n to a string called "message". It then sends an echo piped to the mail command to generate an email containing all the ports not found. What happens currently is that I will get an email saying something like this:
SCP Test on port 22 is currently down!nOHS Webtier on port 9282 is currently down!n etc...
I want it to put each message on a new line like so:
SCP Test on port 22 is currently down!
Webtier on port 9282 is currently down!
I am trying to avoid writing the output to a file and then using bash to read it back into the mail command. Is this possible without having to create a file?
I was finally able to fix the issue by changing the character being appended and the command sent to bash to the following:
message = message + warning + "\n"
cmd = 'echo -e '+'"'+body+'"'+'|awk \'{ print $0" " }\''+' | mail -s '+'"'+subject+'"'+' -r '+fromaddr+' '+toaddr
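Note that smtplib is imported in the script above but never used; a minimal sketch of sending the message through it instead (the SMTP host, port, and addresses below are placeholders) sidesteps the echo quoting problem entirely, since real newlines in a Python string survive intact:
import smtplib
from email.mime.text import MIMEText

# Placeholder host and addresses; real newlines in the body need no escaping.
body = "SCP Test on port 22 is currently down!\nWebtier on port 9282 is currently down!"
msg = MIMEText(body)
msg["Subject"] = "Testing..."
msg["From"] = "server@xxxxx.com"
msg["To"] = "email@xxxxx.com"

sm = smtplib.SMTP("smtp.xxxxx.com", 25)
sm.sendmail(msg["From"], [msg["To"]], msg.as_string())
sm.quit()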

How do I change the hostname using Python on a Raspberry Pi

I tried using (going from memory, this may not be 100% accurate):
import socket
socket.sethostname("NewHost")
I got a permissions error.
How would I approach this entirely from within the Python program?
If you only need to change the hostname until the next reboot, many Linux systems can change it with:
import subprocess
subprocess.call(['hostname', 'newhost'])
or with less typing but some potential pitfalls:
import os
os.system('hostname %s' % 'newhost')
I wanted to change the hostname permanently, which required making changes in a few places, so I made a shell script:
#!/bin/bash
# /usr/sbin/change_hostname.sh - program to permanently change hostname. Permissions
# are set so that www-user can `sudo` this specific program.
# args:
# $1 - new hostname, should be a legal hostname
sed -i "s/$HOSTNAME/$1/g" /etc/hosts
echo $1 > /etc/hostname
/etc/init.d/hostname.sh
hostname $1 # this is to update the current hostname without restarting
In Python, I ran the script with subprocess.run:
subprocess.run(['sudo', '/usr/sbin/change_hostname.sh', newhostname])
This was happening from a webserver which was running as www-data, so I allowed it to sudo this specific script without a password. You can skip this step and run the script without sudo if you're running as root or similar:
# /etc/sudoers.d/099-www-data-nopasswd-hostname
www-data ALL = (root) NOPASSWD: /usr/sbin/change_hostname.sh
Here is a different approach:
import os

def setHostname(newhostname):
    with open('/etc/hosts', 'r') as file:
        # read a list of lines into data
        data = file.readlines()
    # the host name is on the 6th line following the IP address,
    # so this replaces that line with the new hostname
    data[5] = '127.0.1.1 ' + newhostname
    # save the file temporarily because /etc/hosts is protected
    with open('temp.txt', 'w') as file:
        file.writelines(data)
    # use sudo command to overwrite the protected file
    os.system('sudo mv temp.txt /etc/hosts')
    # repeat the process with the other file
    with open('/etc/hostname', 'r') as file:
        data = file.readlines()
    data[0] = newhostname
    with open('temp.txt', 'w') as file:
        file.writelines(data)
    os.system('sudo mv temp.txt /etc/hostname')

# then call the function
setHostname('whatever')

At the next reboot the hostname will be set to the new name.

Logging last Bash command to file from script

I write lots of small scripts to manipulate files on a Bash-based server. I would like to have a mechanism by which to log which commands created which files in a given directory. However, I don't just want to capture every input command, all the time.
Approach 1: a wrapper script that uses a Bash builtin (a la history or fc -ln -1) to grab the last command and write it to a log file. I have not been able to figure out any way to do this, as the shell builtin commands do not appear to be recognized outside of the interactive shell.
Approach 2: a wrapper script that pulls from ~/.bash_history to get the last command. This, however, requires setting up the Bash shell to flush every command to history immediately (as per this comment) and seems also to require that the history be allowed to grow inexorably. If this is the only way, so be it, but it would be great to avoid having to edit the ~/.bashrc file on every system where this might be implemented.
Approach 3: use script. My problem with this is that it requires multiple commands to start and stop the logging, and because it launches its own shell it is not callable from within another script (or at least, doing so complicates things significantly).
I am trying to figure out an implementation that's of the form log_this.script other_script other_arg1 other_arg2 > file, where everything after the first argument is logged. The emphasis here is on efficiency and minimizing syntax overhead.
EDIT: iLoveTux and I both came up with similar solutions. For those interested, my own implementation follows. It is somewhat more constrained in its functionality than the accepted answer, but it also auto-updates any existing logfile entries with changes (though not deletions).
Sample usage:
$ cmdlog.py "python3 test_script.py > test_file.txt"
creates a log file in the parent directory of the output file with the following:
2015-10-12@10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
Additional file changes are added to the log:
$ cmdlog.py "python3 test_script.py > test_file_2.txt"
the log now contains
2015-10-12@10:47:09 test_file.txt "python3 test_script.py > test_file.txt"
2015-10-12@10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
Running on the original file name again changes the file order in the log, based on modification time of the files:
$ cmdlog.py "python3 test_script.py > test_file.txt"
produces
2015-10-12@10:47:44 test_file_2.txt "python3 test_script.py > test_file_2.txt"
2015-10-12@10:48:01 test_file.txt "python3 test_script.py > test_file.txt"
Full script:
#!/usr/bin/env python3
'''
A wrapper script that will write the command-line
args associated with any files generated to a log
file in the directory where the files were made.
'''
import sys
import os
from os import listdir
from os.path import isfile, join
import subprocess
import time
from datetime import datetime

def listFiles(mypath):
    """
    Return relative paths of all files in mypath
    """
    return [join(mypath, f) for f in listdir(mypath) if
            isfile(join(mypath, f))]

def read_log(log_file):
    """
    Reads a file history log and returns a dictionary
    of {filename: command} entries.
    Expects tab-separated lines of [time, filename, command]
    """
    entries = {}
    with open(log_file) as log:
        for l in log:
            l = l.strip()
            mod, name, cmd = l.split("\t")
            # cmd = cmd.lstrip("\"").rstrip("\"")
            entries[name] = [cmd, mod]
    return entries

def time_sort(t, fmt):
    """
    Turn a strftime-formatted string into a tuple
    of time info
    """
    parsed = datetime.strptime(t, fmt)
    return parsed

ARGS = sys.argv[1]
ARG_LIST = ARGS.split()

# Guess where logfile should be put
if ">" in ARG_LIST or ">>" in ARG_LIST:
    # Get position after redirect in arg list
    # (e in ">>" matches both ">" and ">>")
    redirect_index = max(ARG_LIST.index(e) for e in ARG_LIST if e in ">>")
    output = ARG_LIST[redirect_index + 1]
    output = os.path.abspath(output)
    out_dir = os.path.dirname(output)
elif "cp" in ARG_LIST or "mv" in ARG_LIST:
    output = ARG_LIST[-1]
    out_dir = os.path.dirname(output)
else:
    out_dir = os.getcwd()

# Set logfile location within the inferred output directory
LOGFILE = out_dir + "/cmdlog_history.log"

# Get file list state prior to running
all_files = listFiles(out_dir)
pre_stats = [os.path.getmtime(f) for f in all_files]

# Run the desired external commands
subprocess.call(ARGS, shell=True)

# Get done time of external commands
TIME_FMT = "%Y-%m-%d@%H:%M:%S"
log_time = time.strftime(TIME_FMT)

# Get existing entries from logfile, if present
if LOGFILE in all_files:
    logged = read_log(LOGFILE)
else:
    logged = {}

# Get file list state after run is complete
post_stats = [os.path.getmtime(f) for f in all_files]
post_files = listFiles(out_dir)

# Find files whose states have changed since the external command
changed = [e[0] for e in zip(all_files, pre_stats, post_stats) if e[1] != e[2]]
new = [e for e in post_files if e not in all_files]
all_modded = list(set(changed + new))

if not all_modded: # exit early, no need to log
    sys.exit(0)

# Replace files that have changed, add those that are new
for f in all_modded:
    name = os.path.basename(f)
    logged[name] = [ARGS, log_time]

# Write changed files to logfile
with open(LOGFILE, 'w') as log:
    for name, info in sorted(logged.items(), key=lambda x: time_sort(x[1][1], TIME_FMT)):
        cmd, mod_time = info
        if not cmd.startswith("\""):
            cmd = "\"{}\"".format(cmd)
        log.write("\t".join([mod_time, name, cmd]) + "\n")

sys.exit(0)
You can use the tee command, which stores its standard input to a file and outputs it on standard output. Pipe the command line into tee, and pipe tee's output into a new invocation of your shell:
echo '<command line to be logged and executed>' | \
tee --append /path/to/your/logfile | \
$SHELL
i.e., for your example of other_script other_arg1 other_arg2 > file,
echo 'other_script other_arg1 other_arg2 > file' | \
tee --append /tmp/mylog.log | \
$SHELL
If your command line needs single quotes, they need to be escaped properly.
OK, so you don't mention Python in your question, but it is tagged Python, so I figured I would see what I could do. I came up with this script:
import sys
from os.path import expanduser, join
from subprocess import Popen, PIPE

def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
log_file = join(home, "command_log")
# join argv into a single string: with shell=True, Popen expects the
# whole command line as one string
command = " ".join(sys.argv[1:])
with open(log_file, "a") as fout:
    fout.write("{}\n".format(command))
out, err = issue_command(command)
which you can call like (if you name it log_this and make it executable):
$ log_this echo hello world
and it will put "echo hello world" in a file ~/command_log. Note, though, that if you want to use pipes or redirection you have to quote your command (this may or may not be a real downfall for your use case, but I haven't figured out how to avoid the quotes just yet), like this:
$ log_this "echo hello world | grep h >> /tmp/hello_world"
but since it's not perfect, I thought I would add a little something extra.
The following script allows you to specify a different file to log your commands to as well as record the execution time of the command:
#!/usr/bin/env python
from subprocess import Popen, PIPE
import argparse
from os.path import expanduser, join
from time import time

def issue_command(command):
    process = Popen(command, stdout=PIPE, stderr=PIPE, shell=True)
    return process.communicate()

home = expanduser("~")
default_file = join(home, "command_log")

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--file", type=argparse.FileType("a"), default=default_file)
parser.add_argument("-p", "--profile", action="store_true")
parser.add_argument("command", nargs=argparse.REMAINDER)
args = parser.parse_args()

# join the remainder args into one string for shell=True (same as above)
command = " ".join(args.command)
if args.profile:
    start = time()
    out, err = issue_command(command)
    runtime = time() - start
    entry = "{}\t{}\n".format(command, runtime)
    args.file.write(entry)
else:
    out, err = issue_command(command)
    entry = "{}\n".format(command)
    args.file.write(entry)
args.file.close()
You would use this the same way as the other script, but if you wanted to specify a different file to log to just pass -f <FILENAME> before your actual command and your log will go there, and if you wanted to record the execution time just provide the -p (for profile) before your actual command like so:
$ log_this -p -f ~/new_log "echo hello world | grep h >> /tmp/hello_world"
I will try to make this better, but if you can think of anything else it could do for you, I am making a GitHub project for this where you can submit bug reports and feature requests.

permanently change directory python scripting/what environment do python scripts run in?

I have a small git_cloner script that clones my company's projects correctly. In all my scripts, I use a func that hasn't given me problems yet:
def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()
At the end of this individual script, I use:
call_sp('cd {}'.format(branch_path))
This line does not change the directory of the terminal I ran my script in to branch_path; in fact, even worse, it annoyingly asks me for my password! When I remove the cd yadayada line above, my script no longer demands a password before completing. I wonder:
How are these python scripts actually running, since the cd command had no permanent effect? I assume the script spawns its own private subprocess, separate from what the terminal is doing, then kills itself when the script finishes?
Based on how #1 works, how do I force my scripts to change the terminal's directory permanently, to save me time?
Why would merely running a change directory ask me for my password?
The full script is below, thank you,
Cody
#!/usr/bin/env python
import subprocess
import sys
import time
from os.path import expanduser

home_path = expanduser('~')
project_path = home_path + '/projects'
d = {'cwd': ''}

# calling from script:
# ./git_cloner.py projectname branchname
# to make a new branch say ./git_cloner.py project branchname
# interactive:
# just run ./git_cloner.py
if len(sys.argv) == 3:
    project = sys.argv[1]
    branch = sys.argv[2]
if len(sys.argv) < 3:
    while True:
        project = raw_input('Enter a project name (i.e., mainworkproject):\n')
        if not project:
            continue
        break
    while True:
        branch = raw_input('Enter a branch name (i.e., dev):\n')
        if not branch:
            continue
        break

def call_sp(command, **arg_list):
    p = subprocess.Popen(command, shell=True, **arg_list)
    p.communicate()

print "making new branch \"%s\" in project \"%s\"" % (branch, project)
this_project_path = '%s/%s' % (project_path, project)
branch_path = '%s/%s' % (this_project_path, branch)
d['cwd'] = project_path
call_sp('mkdir %s' % branch, **d)
d['cwd'] = branch_path
git_string = 'git clone ssh://git@git/home/git/repos/{}.git {}'.format(project, d['cwd'])
# see what you're doing to maybe need to cancel
print '\n'
print "{}\n\n".format(git_string)
call_sp(git_string)
time.sleep(30)
call_sp('git checkout dev', **d)
time.sleep(2)
call_sp('git checkout -b {}'.format(branch), **d)
time.sleep(5)
# ...then I make some symlinks, which work
call_sp('cp {}/dev/settings.py {}/settings.py'.format(project_path, branch_path))
print 'dont forget "git push -u origin {}"'.format(branch)
call_sp('cd {}'.format(branch_path))
You cannot use Popen to change the current directory of the running script. Popen will create a new process with its own environment. If you do a cd within that, it will change directory for that running process, which will then immediately exit.
If you want to change the directory for the script you could use os.chdir(path), then all subsequent commands in the script will be run from that new path.
Child processes cannot alter the environment of their parents though, so you can't have a process you create change the environment of the caller.
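A minimal sketch of that os.chdir route (the path below is hypothetical): the change applies to the script's own process and is inherited by child processes spawned afterwards, but it still disappears when the script exits:
import os
import subprocess

os.chdir('/home/me/projects/mybranch')     # hypothetical path; affects only this process
subprocess.call('git status', shell=True)  # child process inherits the new working directory
print(os.getcwd())                         # the script itself is now in the new directory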

execute local python script over sshClient() with Paramiko in remote machine

This is my first post in StackOverflow, so I hope to do it the right way! :)
I have this task to do for my new job that needs to connect to several servers and execute a python script in all of them. I'm not very familiar with servers (and just started using paramiko), so I apologize for any big mistakes!
The script I want to run on them modifies the authorized_keys file but to start, I'm trying it with only one server and not yet using the aforementioned script (I don't want to make a mistake and block the server in my first task!).
I'm just trying to list the directory in the remote machine with a very simple function called getDir(). So far, I've been able to connect to the server with paramiko using the basics (I'm using pdb to debug the script by the way):
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb
def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username = "root", pkey = my_key)
    i, o, e = ssh.exec_command(get_dir())
This is the function to get the directory list:
getDir.py
#!/usr/bin/python
import os
import pdb
def get_dir():
    pdb.set_trace()
    print "Current dir list is:"
    for item in os.listdir(os.getcwd()):
        print item
While debugging I got the directory list of my local machine instead of the one from the remote machine... is there a way to pass a python function as a parameter through paramiko? I would like to just have the script locally and run it remotely like when you do it with a bash file from ssh with:
ssh -i pth/to/key username@domain.com 'bash -s' < script.sh
so as to avoid copying the python script to every machine and then running it from there (I guess with the above command the script would also be copied to the remote machine and then deleted, right?). Is there a way to do that with paramiko.SSHClient()?
I have also tried to modify the code and use the standard output of the channel that creates exec_command to list the directory leaving the scripts like:
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb
def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username = "root", pkey = my_key)
    i, o, e = ssh.exec_command(get_dir())
    for line in o.readlines():
        print line
    for line in e.readlines():
        print line
getDir.py
import os

def get_dir():
    return ', '.join(os.listdir(os.getcwd()))
But with this, it actually tries to run the local directory list as commands (which actually makes sense the way I have it). I had to convert the list to a string because I was getting a TypeError saying that it expects a string or a read-only character buffer, not a list... I know this was a desperate attempt to pass the function... Does anyone know how I could do such a thing (pass a local function through paramiko to execute it on a remote machine)?
If you have any corrections or tips on the code, they are very much welcome (actually, any kind of help would be very much appreciated!).
Thanks a lot in advance! :)
You cannot just execute a Python function through ssh. ssh is just a tunnel with your code on one side (the client) and a shell on the other (the server). You should execute shell commands on the remote side.
If using raw ssh code is not critical, I suggest Fabric as a library for writing administration tools. It contains tools for easy ssh handling, file transfer, sudo, parallel execution, and more.
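For illustration, a minimal sketch in the Fabric 1.x style (the hostnames are placeholders; save it as fabfile.py and run it with fab list_remote_dir):
# fabfile.py - a sketch assuming Fabric 1.x
from fabric.api import env, run

env.hosts = ["root@server1", "root@server2"]  # placeholder hosts

def list_remote_dir():
    # run() executes the command on each remote host over ssh
    run("ls ~")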
I think you might want to change the parameters you're passing into ssh.exec_command. Here's an idea:
Instead of doing:
def get_dir():
    return ', '.join(os.listdir(os.getcwd()))

i, o, e = ssh.exec_command(get_dir())
You might want to try:
i, o, e = ssh.exec_command('pwd')
print(o.readlines())
And other things to explore:
Writing a bash script or a Python script that lives on your servers. You can use Paramiko to log onto the server and execute the script with ssh.exec_command("sh some_script.sh") or ssh.exec_command("python some_script.py")
Paramiko has some FTP/SFTP utilities so you can actually use it to put the script on the server and then execute it.
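For instance, a hedged sketch of that put-then-run idea, reusing the connected ssh client from the question (the remote path is a placeholder):
# `ssh` is the already-connected paramiko.SSHClient from above
sftp = ssh.open_sftp()
sftp.put("getDir.py", "/tmp/getDir.py")  # placeholder remote path
sftp.close()
stdin, stdout, stderr = ssh.exec_command("python /tmp/getDir.py")
print(stdout.read())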
It is possible to do this by using a here document to feed a module into the remote server's python interpreter.
import paramiko

remotepypath = "/usr/bin/"

# open the module as a text file
with open("getDir.py", "r") as f:
    mymodule = f.read()

# setup from OP code
ssh = paramiko.SSHClient()
ssh.load_host_keys("pth/to/known_hosts")
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
ssh.connect(server, username = "root", pkey = my_key)

# use here document to feed module into python interpreter
stdin, stdout, stderr = ssh.exec_command("{p}python - <<EOF\n{s}\nEOF".format(p=remotepypath, s=mymodule))
print("stderr: ", stderr.readlines())
print("stdout: ", stdout.readlines())
