Python multiprocessing-like execution over ssh

Looking at the basic example from Python's multiprocessing docs page:

from multiprocessing import Pool

def f(x):
    return x*x

if __name__ == '__main__':
    with Pool(5) as p:
        print(p.map(f, [1, 2, 3]))
This will execute f in separate processes that are auto-started, but on the local machine.
I see that it has support for remote execution, but that requires the managers to be started manually, and it also looks to be networking-only (i.e. outside of SSH, with no support for e.g. stdin/stdout serialization or anything of that sort).
Is there a way to call python functions (as opposed to executables, as can be done e.g. using paramiko.client.SSHClient.exec_command) on remote hosts via SSH automatically? By "automatically" I mean without needing to manually handle process starting / stopping and communication (serialization of input parameters and return value).

The following code is an example of how I would execute multiple remote commands concurrently using a multithreading pool. command1 below shows how I would invoke a remote Python function. Note that the full path to the Python interpreter may be required (unless it is in your home directory), because the usual PATH environment variable will not be set up: your .bash_profile script will not have been executed.
The return value from invoking foo is "printed" out and thus becomes the stdout output returned by function connect_and_execute_command. This is by definition a string. If the type returned by foo is a builtin type, such as an int or a dict containing builtin types for its values, then the string representation can be converted back into its "real" type using ast.literal_eval.
import paramiko
from multiprocessing.pool import ThreadPool
from ast import literal_eval

def execute_command(client, command):
    """
    Execute a command with client, which is already connected to some host.
    """
    stdin_, stdout_, stderr_ = client.exec_command(command)
    return [stdout_.read().decode(), stderr_.read().decode()]

def connect(hostname, username, password=None):
    client = paramiko.client.SSHClient()
    client.load_system_host_keys()
    client.set_missing_host_key_policy(paramiko.client.AutoAddPolicy())
    client.connect(hostname=hostname, username=username, password=password)
    return client

def connect_and_execute_command(command, hostname, username, password=None):
    with connect(hostname, username, password) as client:
        return execute_command(client, command)

command0, host0, user0, password0 = 'ls -l', 'some_host0', 'some_username0', 'some_password0'
command1, host1, user1, password1 = '''~/my_local_python/python -c "from temp import foo; print(foo(6), end='')"''', 'some_host1', 'some_username1', 'some_password1'
requests = ((command0, host0, user0, password0), (command1, host1, user1, password1))

with ThreadPool(len(requests)) as pool:
    results = pool.starmap(connect_and_execute_command, requests)

# Convert stdout response from command1:
results[1][0] = literal_eval(results[1][0])

for stdout_response, _ in results:
    print(stdout_response, end='')
    print()
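For command1 to work, the remote host must have a module named temp (defining a function foo) that is importable by the remote interpreter, e.g. in the home directory where the command starts. Its contents are not shown in the question; a hypothetical minimal sketch:

# temp.py -- hypothetical module assumed to live on the remote host.
# foo should return something whose repr() can be parsed back with
# ast.literal_eval on the calling side (ints, strings, lists, dicts, ...).
def foo(x):
    return {'input': x, 'square': x * x}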

Related

python subprocess popen execute as different user

I am trying to execute a command in Python 3.6 as a different user with popen from subprocess, but it will still execute as the user who called the script (I plan to call it as root). I am using threads, and therefore it is important that I don't violate user rights when two threads execute in parallel.
proc = subprocess.Popen(['echo $USER; touch myFile.txt'],
                        shell=True,
                        env={'FOO': 'bar', 'USER': 'www-data'},
                        stdout=subprocess.PIPE)
The example above will still create myFile.txt owned by my user id 1000.
I tried different approaches:
tried the approach described in "Run child processes as different user from a long running Python process", copying os.environ and changing the user, etc. (note this is for Python 2)
tried the approach described in https://docs.python.org/3.6/library/subprocess.html#popen-constructor, using start_new_session=True
My last option is to prefix the command with sudo -u username command, but I don't think this is the elegant way.
The standard way [POSIX only] would be to use preexec_fn to set gid and uid as described in more detail in this answer
Something like this should do the trick -- for completeness I've also modified your initial snippet to include the other environment variables you'd likely want to set, but just setting preexec_fn should be sufficient for the simple command you are running:
import os, pwd, subprocess

def demote(user_uid, user_gid):
    def result():
        os.setgid(user_gid)
        os.setuid(user_uid)
    return result

def exec_cmd(username):
    # get user info from username
    pw_record = pwd.getpwnam(username)
    homedir = pw_record.pw_dir
    user_uid = pw_record.pw_uid
    user_gid = pw_record.pw_gid
    env = os.environ.copy()
    env.update({'HOME': homedir, 'LOGNAME': username, 'PWD': os.getcwd(), 'FOO': 'bar', 'USER': username})

    # execute the command
    proc = subprocess.Popen(['echo $USER; touch myFile.txt'],
                            shell=True,
                            env=env,
                            preexec_fn=demote(user_uid, user_gid),
                            stdout=subprocess.PIPE)
    proc.wait()

exec_cmd('www-data')
Note that you'll also need to be sure the current working directory is accessible (e.g. writable) by the demoted user, since the code above does not override it explicitly.
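If you would rather not rely on wherever the script happens to be started from, Popen also accepts a cwd argument; a small sketch (reusing env, homedir, demote, user_uid and user_gid from exec_cmd above) that runs the command from the demoted user's home directory:

# Sketch: same call as above, but run from the demoted user's home
# directory so the touched file is created somewhere that user can write.
proc = subprocess.Popen(['echo $USER; touch myFile.txt'],
                        shell=True,
                        env=env,
                        cwd=homedir,
                        preexec_fn=demote(user_uid, user_gid),
                        stdout=subprocess.PIPE)
proc.wait()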

How to check whether a shell command returned nothing or something

I am writing a script to extract something from a specified path. I am returning those values into a variable. How can I check whether the shell command has returned something or nothing?
My Code:
def any_HE():
    global config, logger, status, file_size
    config = ConfigParser.RawConfigParser()
    config.read('config2.cfg')
    for section in sorted(config.sections(), key=str.lower):
        components = dict()  # start with empty dictionary for each section
        # Retrieving the username and password from config for each section
        if not config.has_option(section, 'server.user_name'):
            continue
        env.user = config.get(section, 'server.user_name')
        env.password = config.get(section, 'server.password')
        host = config.get(section, 'server.ip')
        print "Trying to connect to {} server.....".format(section)
        with settings(hide('warnings', 'running', 'stdout', 'stderr'), warn_only=True, host_string=host):
            try:
                files = run('ls -ltr /opt/nds')
                if files != 0:
                    print '{}--Something'.format(section)
                else:
                    print '{} --Nothing'.format(section)
            except Exception as e:
                print e
I tried checking for 1 or 0 and True or False, but nothing seems to be working. On some servers, the path '/opt/nds/' does not exist, so in that case nothing will be in files. I want to differentiate between something being returned to files and nothing being returned.
First, you're hiding stdout.
If you get rid of that you'll get a string with the outcome of the command on the remote host. You can then split it by os.linesep (assuming same platform), but you should also take care of other things like SSH banners and colours from the retrieved outcome.
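As a concrete sketch of that, assuming fabric 1.x and the same warn_only=True settings block as in the question: the string returned by run() also carries the command's exit status as .succeeded / .failed attributes, so you can test both the status and whether any output is left after stripping:

# Sketch: drop-in replacement for the try block in the question's loop.
files = run('ls -ltr /opt/nds')
if files.succeeded and files.strip():
    print '{} -- Something'.format(section)
    for line in files.splitlines():
        print line
else:
    # non-zero exit status (e.g. /opt/nds missing) or empty output
    print '{} -- Nothing'.format(section)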
As perror commented already, the python subprocess module offers the right tools.
https://docs.python.org/2/library/subprocess.html
For your specific problem you can use the check_output function.
The documentation gives the following example:
import subprocess
subprocess.check_output(["echo", "Hello World!"])
gives "Hello World"
plumbum is a great library for running shell commands from a Python script. E.g.:
from plumbum.cmd import ls
from plumbum import ProcessExecutionError

cmd = ls['-ltr']['/opt/nds']  # construct the command
try:
    files = cmd().splitlines()  # run the command
    if ...:
        print ...
except ProcessExecutionError:
    # command exited with a non-zero status code
    ...
On top of this basic usage (and unlike the subprocess module), it also supports things like output redirection and command pipelining, and more, with easy, intuitive syntax (by overloading python operators, such as '|' for piping).
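For example, a small sketch of the pipelining syntax mentioned above (grep is only an illustrative second command):

from plumbum.cmd import ls, grep

# Build "ls -ltr /opt/nds | grep conf" without invoking a shell;
# nothing runs until the chain is actually called.
chain = ls['-ltr', '/opt/nds'] | grep['conf']
print chain()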
In order to get more control of the process you run, you need to use the subprocess module.
Here is an example of code:
import subprocess
task = subprocess.Popen(['ls', '-ltr', '/opt/nds'], stdout=subprocess.PIPE)
print task.communicate()
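Building on that, a short sketch of how the output/return-code pair answers the original "something or nothing" question (same command as above):

import subprocess

task = subprocess.Popen(['ls', '-ltr', '/opt/nds'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = task.communicate()
if task.returncode == 0 and out.strip():
    print "Something:"
    print out
else:
    # non-zero exit (e.g. the path does not exist) or empty output
    print "Nothing"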

Grab output from shell command which is run in the background

I saw some useful information in this post about how you can't expect to run a process in the background if you are retrieving output from it using subprocess. The problem is ... this is exactly what I want to do!
I have a script which drops commands to various hosts via ssh and I don't want to have to wait on each one to finish before starting the next. Ideally, I could have something like this:
for host in hostnames:
    p[host] = Popen(["ssh", mycommand], stdout=PIPE, stderr=PIPE)
    pout[host], perr[host] = p[host].communicate()
which would have (in the case where mycommand takes a very long time) all of the hosts running mycommand at the same time. As it is now, it appears that each ssh command runs to completion before the next one starts. This is (according to the previous post I linked) due to the fact that I am capturing output, right? Other than just cat-ing the output to a file and reading it later, is there a decent way to make these things happen on various hosts in parallel?
You may want to use fabric for this.
Fabric is a Python (2.5-2.7) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
Example file:
from fabric.api import run, env

def do_mycommand():
    mycommand = "ls"  # change to your command
    output = run(mycommand)
    print "Output of %s on %s:%s" % (mycommand, env.host_string, output)
Now to execute on all hosts (host1,host2 ... is where all hosts go):
fab -H host1,host2 ... do_mycommand
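Since the original goal was to not wait on each host in turn, it may also be worth noting (based on the fabric 1.x docs as I recall them) that a task can be marked to run on all hosts concurrently with the parallel decorator, equivalent to passing -P on the command line:

# Sketch, assuming fabric 1.x: the same task, marked to run on all
# hosts at the same time instead of serially.
from fabric.api import run, env, parallel

@parallel
def do_mycommand():
    mycommand = "ls"  # change to your command
    output = run(mycommand)
    print "Output of %s on %s:%s" % (mycommand, env.host_string, output)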
You could use threads for achieving parallelism and a Queue for retrieving results in a thread-safe way:
import subprocess
import threading
import Queue

def run_remote_async(host, command, result_queue, identifier=None):
    if isinstance(command, str):
        command = [command]
    if identifier is None:
        identifier = "{}: '{}'".format(host, ' '.join(command))

    def worker(worker_command_list, worker_identifier):
        p = subprocess.Popen(worker_command_list,
                             stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        result_queue.put((worker_identifier, ) + p.communicate())

    t = threading.Thread(target=worker,
                         args=(['ssh', host] + command, identifier),
                         name=identifier)
    t.daemon = True
    t.start()
    return t
Then, a possible test case could look like this:
def test():
    data = [('host1', ['ls', '-la']),
            ('host2', 'whoami'),
            ('host3', ['echo', '"Foobar"'])]
    q = Queue.Queue()
    for host, command in data:
        run_remote_async(host, command, q)
    for i in range(len(data)):
        identifier, stdout, stderr = q.get()
        print identifier
        print stdout
Queue.get() is blocking, so at this point you can collect the results one after another, as each task completes.
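Since run_remote_async returns the thread object, here is an optional sketch (a drop-in for the body of test() above) that also keeps those threads and joins them, in case you want to be certain every worker has exited and not only that every result has been consumed:

# Sketch: same collection loop as test(), but keeping the returned
# threads so they can be joined once all results have been read.
threads = [run_remote_async(host, command, q) for host, command in data]
for i in range(len(data)):
    identifier, stdout, stderr = q.get()
    print identifier
    print stdout
for t in threads:
    t.join()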

Enter an ssh password using the standard python library (not pexpect)

Related questions that are essentially asking the same thing, but have answers that don't work for me:
Make python enter password when running a csh script
How to interact with ssh using subprocess module
How to execute a process remotely using python
I want to ssh into a remote machine and run one command. For example:
ssh <user>@<ipv6-link-local-addr>%eth0 sudo service fooService status
The problem is that I'm trying to do this through a python script with only the standard libraries (no pexpect). I've been trying to get this to work using the subprocess module, but calling communicate always blocks when requesting a password, even though I supplied the password as an argument to communicate. For example:
proc = subprocess.Popen(
    [
        "ssh",
        "{testUser1}@{testHost1}%eth0".format(**locals()),
        "sudo service cassandra status"],
    shell=False,
    stdin=subprocess.PIPE)
a, b = proc.communicate(input=testPasswd1)
print "a:", a, "b:", b
print "return code: ", proc.returncode
I've tried a number of variants of the above, as well (e.g., removing "input=", adding/removing subprocess.PIPE assignments to stdout and stderr). However, the result is always the same prompt:
ubuntu@<ipv6-link-local-addr>%eth0's password:
Am I missing something? Or is there another way to achieve this using the python standard libraries?
This answer is just an adaptation of this answer by Torxed, which I recommend you go upvote. It simply adds the ability to capture the output of the command you execute on the remote server.
import pty
from os import waitpid, execv, read, write

class ssh():
    def __init__(self, host, execute='echo "done" > /root/testing.txt',
                 askpass=False, user='root', password=b'SuperSecurePassword'):
        self.exec_ = execute
        self.host = host
        self.user = user
        self.password = password
        self.askpass = askpass
        self.run()

    def run(self):
        command = [
            '/usr/bin/ssh',
            self.user + '@' + self.host,
            '-o', 'NumberOfPasswordPrompts=1',
            self.exec_,
        ]

        # PID = 0 for child, and the PID of the child for the parent
        pid, child_fd = pty.fork()

        if not pid:  # Child process
            # Replace child process with our SSH process
            execv(command[0], command)

        ## if we haven't set up pub-key authentication
        ## we can loop for a password prompt and "insert" the password.
        while self.askpass:
            try:
                output = read(child_fd, 1024).strip()
            except:
                break
            lower = output.lower()

            # Write the password
            if b'password:' in lower:
                write(child_fd, self.password + b'\n')
                break
            elif b'are you sure you want to continue connecting' in lower:
                # Adding key to known_hosts
                write(child_fd, b'yes\n')
            else:
                print('Error:', output)

        # See if there's more output to read after the password has been sent,
        # and capture it in a list.
        output = []
        while True:
            try:
                output.append(read(child_fd, 1024).strip())
            except:
                break

        waitpid(pid, 0)
        return ''.join(output)

if __name__ == "__main__":
    s = ssh("some ip", execute="ls -R /etc", askpass=True)
    print s.run()
Output:
/etc:
adduser.conf
adjtime
aliases
alternatives
apm
apt
bash.bashrc
bash_completion.d
<and so on>

execute local python script over sshClient() with Paramiko in remote machine

This is my first post in StackOverflow, so I hope to do it the right way! :)
I have this task to do for my new job that needs to connect to several servers and execute a python script in all of them. I'm not very familiar with servers (and just started using paramiko), so I apologize for any big mistakes!
The script I want to run on them modifies the authorized_keys file but to start, I'm trying it with only one server and not yet using the aforementioned script (I don't want to make a mistake and block the server in my first task!).
I'm just trying to list the directory in the remote machine with a very simple function called getDir(). So far, I've been able to connect to the server with paramiko using the basics (I'm using pdb to debug the script by the way):
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(get_dir())
This is the function to get the directory list:
getDir.py
#!/usr/bin/python
import os
import pdb

def get_dir():
    pdb.set_trace()
    print "Current dir list is:"
    for item in os.listdir(os.getcwd()):
        print item
While debugging I got the directory list of my local machine instead of the one from the remote machine... is there a way to pass a python function as a parameter through paramiko? I would like to just have the script locally and run it remotely like when you do it with a bash file from ssh with:
ssh -i pth/to/key username@domain.com 'bash -s' < script.sh
so to actually avoid to copy the python script to every machine and then run it from them (I guess with the above command the script would also be copied to the remote machine and then deleted, right?) Is there a way to do that with paramiko.sshClient()?
I have also tried to modify the code and use the standard output of the channel that creates exec_command to list the directory leaving the scripts like:
try_paramiko.py
#!/usr/bin/python
import paramiko
from getDir import get_dir
import pdb

def try_this(server):
    pdb.set_trace()
    ssh = paramiko.SSHClient()
    ssh.load_host_keys("pth/to/known_hosts")
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
    ssh.connect(server, username="root", pkey=my_key)
    i, o, e = ssh.exec_command(get_dir())
    for line in o.readlines():
        print line
    for line in e.readlines():
        print line
getDir.py
import os

def get_dir():
    return ', '.join(os.listdir(os.getcwd()))
But with this, it actually tries to run the local directory listing as commands (which actually makes sense the way I have it). I had to convert the list to a string because I was getting a TypeError saying that it expects a string or a read-only character buffer, not a list... I know this was a desperate attempt to pass the function... Does anyone know how I could do such a thing (pass a local function through paramiko and execute it on a remote machine)?
If you have any corrections or tips on the code, they are very much welcome (actually, any kind of help would be very much appreciated!).
Thanks a lot in advance! :)
You cannot just execute a Python function through ssh. ssh is just a tunnel with your code on one side (the client) and a shell on the other (the server). You have to execute shell commands on the remote side.
If using raw ssh code is not critical, I suggest fabric as a library for writing administration tools. It contains tools for easy ssh handling, file transfer, sudo, parallel execution, and more.
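For illustration, a minimal sketch of a fabfile for the directory-listing case (fabric 1.x assumed; the task name list_dir and the key path are placeholders), run with fab -H yourserver list_dir:

# fabfile.py -- hedged sketch using fabric 1.x
from fabric.api import run, env

env.user = 'root'
env.key_filename = 'pth/to/id_rsa'

def list_dir():
    # runs "ls" on the remote host and returns its stdout as a string
    output = run('ls')
    print output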
I think you might want to change the parameters you're passing into ssh.exec_command. Here's an idea:
Instead of doing:
def get_dir():
    return ', '.join(os.listdir(os.getcwd()))

i, o, e = ssh.exec_command(get_dir())
You might want to try:
i, o, e = ssh.exec_command('pwd')
print o.readlines()
And other things to explore:
Writing a bash or Python script that lives on your servers. You can use Paramiko to log onto the server and execute the script with ssh.exec_command('some_script.sh') or ssh.exec_command('python some_script.py')
Paramiko has some FTP/SFTP utilities so you can actually use it to put the script on the server and then execute it.
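For the second bullet, a hedged sketch of that put-then-run approach using paramiko's SFTP support (ssh is the already-connected SSHClient from the question; the remote path /tmp/getDir.py is just an example):

# Sketch: upload the local script over SFTP, then execute it remotely.
sftp = ssh.open_sftp()
sftp.put("getDir.py", "/tmp/getDir.py")   # local path, example remote path
sftp.close()

# (for this to print anything, getDir.py needs to call get_dir() when run as a script)
stdin, stdout, stderr = ssh.exec_command("python /tmp/getDir.py")
print stdout.read()
print stderr.read()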
It is possible to do this by using a here document to feed a module into the remote server's python interpreter.
remotepypath = "/usr/bin/"

# open the module as a text file
with open("getDir.py", "r") as f:
    mymodule = f.read()

# setup from OP code
ssh = paramiko.SSHClient()
ssh.load_host_keys("pth/to/known_hosts")
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
my_key = paramiko.RSAKey.from_private_key_file("pth/to/id_rsa")
ssh.connect(server, username="root", pkey=my_key)

# use here document to feed module into python interpreter
stdin, stdout, stderr = ssh.exec_command("{p}python - <<EOF\n{s}\nEOF".format(p=remotepypath, s=mymodule))
print("stderr: ", stderr.readlines())
print("stdout: ", stdout.readlines())
