I would like to be able to control a remote Python interpreter over an SSH connection, and drive it from Python itself.
I've got a basic template:
import select
import socket
import sys

import paramiko

# standard paramiko client setup; servername, serverport, username and
# key_filename are placeholders defined elsewhere
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(servername, serverport, username, key_filename=key_filename)
transport = ssh.get_transport()
channel = transport.open_session()
channel.exec_command(PATH_TO_EXEC)

while True:
    r, w, e = select.select([channel], [], [], 1)
    if channel in r:
        try:
            if channel.recv_ready():
                x = channel.recv(64)
            elif channel.recv_stderr_ready():
                x = channel.recv_stderr(64)
            else:
                continue
            if len(x) == 0:
                print '\r\n*** EOF\r\n',
                break
            sys.stdout.write(x)
            sys.stdout.flush()
        except socket.timeout:
            pass
which allows me to talk to the remote application (e.g. pdb) by sending commands with channel.send("command\n").
It works perfectly with bash and with gdb, but nothing I do gets an output stream back from Python (v2).
How does Python handle its output stream, and why doesn't my code work with it?
Unless this is an academic exercise, or you have some specific requirement to use ssh, have a look at pushy. I've never used it but it seems mature.
Depending on what your goal is, you could follow one of these two ways (I'm sure there are several other alternatives, though!).
If you want to control execution of scripts on remote machines via python you could try Fabric. From their website:
Fabric is a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks. It provides a basic suite of operations for executing local or remote shell commands (normally or via sudo) and uploading/downloading files, as well as auxiliary functionality such as prompting the running user for input, or aborting execution.
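As a rough illustration, a minimal Fabric (1.x-style) fabfile could look like the sketch below; the host string and the command are just placeholders:

from fabric.api import env, run  # Fabric 1.x-style API

env.hosts = ['user@remote-host']  # placeholder host

def uptime():
    # runs the command on every host in env.hosts when invoked as: fab uptime
    run('uptime')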
If you want to control remote processes and integrate their output into the flow of your main program, you could use the multiprocessing module. From PEP 371:
The package also provides server and client functionality (processing.Manager) to provide remote sharing and management of objects and tasks so that applications may not only leverage multiple cores on the local machine, but also distribute objects and tasks across a cluster of networked machines.
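As a rough sketch of that remote-manager facility (the port and authkey below are arbitrary made-up values), the server side could expose a queue like this:

from multiprocessing.managers import BaseManager
from queue import Queue

task_queue = Queue()

class QueueManager(BaseManager):
    pass

# expose the queue to clients on other machines; port and authkey are arbitrary
QueueManager.register('get_queue', callable=lambda: task_queue)
server = QueueManager(address=('', 50000), authkey=b'secret').get_server()
server.serve_forever()

A client on another machine would register 'get_queue' on the same BaseManager subclass (without a callable), create the manager with the server's address and the same authkey, call connect(), and then call get_queue() to obtain a proxy to the shared queue.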
Related
I am trying to create a program that creates multiple droplets, sends a script to each droplet, and initiates the execution of all of the scripts without waiting for the output. I have tried to run them in the background, using nohup so that they aren't killed off when disconnected from the terminal, with the following code:

from fabric import Connection

for i in range(len(script_names)):
    c = Connection(host=host[i], user=user[i],
                   connect_kwargs={"password": password, "key_filename": key_filename})
    c.run("nohup python3 /root/" + script_names[i] + " &")

I have tried other variations of the same idea, including setting pty=False and redirecting the output to /dev/null with "> /dev/null < /dev/null &", yet nothing seems to work.
Is it possible to issue multiple commands to run scripts on different hosts concurrently without waiting for the output with fabric? Or should I use another package?
Fabric 2.x's groups aren't fully fleshed out yet, so they aren't well suited for this use case. In Fabric 1.x I would accomplish this by using a dictionary for script_names, where the keys are the host strings from your host list and the values are the current script names. Then I'd have each task perform its run commands in parallel as usual, looking up values via fabric.api.env.host_string within the task (see the sketch below). As far as I know, the execution layer of Fabric 2.x does not yet support this use case. This was my attempt at hacking it in, but the author rightly pointed out that this functionality should be handled in an Executor, which I could not come up with a solution for at the time: https://github.com/fabric/fabric/pull/1595
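A rough sketch of that 1.x approach; the hosts, script names and redirection details below are made-up assumptions and may need tuning for your environment:

# minimal Fabric 1.x sketch of the dictionary-keyed-by-host idea;
# hosts and script names are hypothetical
from fabric.api import env, parallel, run, task

script_names = {
    'root@203.0.113.10': 'sim_a.py',
    'root@203.0.113.11': 'sim_b.py',
}
env.hosts = list(script_names)

@task
@parallel
def start():
    script = script_names[env.host_string]
    # pty=False plus full redirection helps nohup survive the SSH session closing
    run("nohup python3 /root/%s > /dev/null 2>&1 &" % script, pty=False)

Running fab start would then kick off each script on its own host in parallel without waiting for the output.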
I just bought a server and was wondering if there was a way to run the code remotely but store/display the results locally. For example, I write some code to display a graph, the positions on the graph are computed by the (remote) server, but the graph is displayed on the (local) tablet.
I would like to do this because the tablet I carry around with me on a day-to-day basis is very slow for computational physics simulations. I understand that I can set up some kind of communications protocol that allows the server to compute things and then send the computations to my tablet for a script on my tablet to handle the data. However, I would like to avoid writing a possibly new set of communications scripts (to handle different formats of data) every single time I run a new simulation.
This is a complete "The Russians used a pencil" solution, but have you considered running a VNC server on the machine that is doing the computations?
You could install a VNC client onto your tablet/phone/PC and view it that way, there are tons of them available. No need to go about creating anything from scratch.
With ssh, you can do this with a python script or a shell script.
ssh machine_name "python" < ~/script/path/script.py
Since the OP indicated in the comments that they want to interact with the script on the remote machine, I have made some changes here.
Copy the Python or shell script to the remote machine. This can be done in several ways, for example with scp, but also with ssh, like this:
ssh machine_name bash -c "cat > /tmp/script.py" < ~/script/path/script.py
Interact with the script on the remote machine
ssh machine_name python -u /tmp/script.py
You should be able to interact with your script running on the remote machine now!
Notice the use of -u to set stdin/stdout of python in unbuffered mode. This is needed to be able to interact with the script.
-u     Force stdin, stdout and stderr to be totally unbuffered. On systems where it matters, also put stdin, stdout and stderr in binary mode. Note that there is internal buffering in xreadlines(), readlines() and file-object iterators ("for line in sys.stdin") which is not influenced by this option. To work around this, you will want to use "sys.stdin.readline()" inside a "while 1:" loop.
Here is an example.
The code, which was copied to the server:
#!/usr/bin/env python3

while True:
    value = input("Please enter the value: ")
    if value != "bye":
        print("Input received from the user is: ", value)
    else:
        print("Good bye!!")
        break
Interactive session:
$ ssh machine_name python -u python/pyecho.py
Please enter the value: 123
Input received from the user is: 123
Please enter the value: bye
Good bye!!
REF:
https://unix.stackexchange.com/questions/87405/how-can-i-execute-local-script-on-remote-machine-and-include-arguments
I'm writing software that runs a bunch of different programs (via twisted's twistd); that is, N daemons of various kinds must be started across multiple machines. If I did this manually, I would be running commands like twistd foo_worker, twistd bar_worker and so on on the machines involved.
Basically there will be a list of machines, and the daemon(s) I need them to run. Additionally, I need to shut them all down when the need arises.
If I were to program this from scratch, I would write a "spawner" daemon that would run permanently on each machine in the cluster with the following features accessible through the network for an authenticated administrator client:
Start a process with a given command line. Return a handle to manage it.
Kill a process given a handle.
Optionally, query stuff like cpu time given a handle.
It would be fairly trivial to program the above, but I cannot imagine this is a new problem. Surely there are existing solutions to doing exactly this? I do however lack experience with server administration, and don't even know what the related terms are.
What existing ways are there to do this on a linux cluster, and what are some of the important terms involved? Python specific solutions are welcome, but not necessary.
Another way to put it: Given a bunch of machines in a lan, how do I programmatically work with them as a cluster?
The most familiar and universal way is just to use ssh. To automate you could use fabric.
To start foo_worker on all hosts:
$ fab all_hosts start:foo_worker
To stop bar_worker on a particular list of hosts:
$ fab -H host1,host2 stop:bar_worker
Here's an example fabfile.py:
from fabric.api import env, run, hide  # pip install fabric

def all_hosts():
    env.hosts = ['host1', 'host2', 'host3']

def start(daemon):
    run("twistd --pid %s.pid %s" % (daemon, daemon))

def stop(daemon):
    run("kill %s" % getpid(daemon))

def getpid(daemon):
    with hide('stdout'):
        return run("cat %s.pid" % daemon)

def ps(daemon):
    """Get process info for the `daemon`."""
    run("ps --pid %s" % getpid(daemon))
There are a number of ways to configure host lists in fabric, with scopes varying from global to per-task, and it's possible to mix and match them as needed.
To streamline process management on a particular host you could write init.d scripts for the daemons (and run service daemon_name start/stop/restart) or use supervisord (and run supervisorctl, e.g., supervisorctl stop all). To control "what is installed where" and to push configuration in a centralized manner, something like puppet could be used.
The usual tool is a batch queue system, such as SLURM, SGE, Torque/Moab, LSF, and so on.
Circus:
Documentation: http://docs.circus.io/en/0.5/index.html
Code: http://pypi.python.org/pypi/circus/0.5
Summary from the documentation:
Circus is a process & socket manager. It can be used to monitor and control processes and sockets.
Circus can be driven via a command-line interface or programmatically through its Python API.
It shares some of the goals of Supervisord, BluePill and Daemontools. If you are curious about what Circus brings compared to other projects, read Why should I use Circus instead of X?.
Circus is designed using ZeroMQ http://www.zeromq.org/. See Design for more details.
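For the programmatic side, a rough sketch using circus's get_arbiter helper might look like this; the watcher command and process count are placeholders, and exact options vary between circus versions:

# rough sketch of driving circus from Python via get_arbiter;
# the command and numprocesses are made-up values
from circus import get_arbiter

arbiter = get_arbiter([{"cmd": "twistd foo_worker", "numprocesses": 1}])
try:
    arbiter.start()
finally:
    arbiter.stop()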
Can you recommend on a python tool / module that allows scheduling tasks on remote machine in a network?
Note that the solution must be able to not only run certain jobs/commands on remote machines, but also verify that the jobs are still running (for example, consider the case where a machine dies after a task has been assigned to it).
RPyC, or Remote Python Call, is a transparent and symmetrical Python library for remote procedure calls, clustering and distributed computing. Here is an example from Wikipedia:
import rpyc
conn = rpyc.classic.connect("hostname") # assuming a classic server is running on 'hostname'
print conn.modules.sys.path
conn.modules.sys.path.append("lucy")
print conn.modules.sys.path[-1]
# a version of 'ls' that runs remotely
def remote_ls(path):
    ros = conn.modules.os
    for filename in ros.listdir(path):
        stats = ros.stat(ros.path.join(path, filename))
        print "%d\t%d\t%s" % (stats.st_size, stats.st_uid, filename)

remote_ls("/usr/bin")

# and exceptions...
try:
    f = conn.builtin.open("/non/existent/file/name")
except IOError:
    pass
To check if the remote server has died after assigning it a job, you can use the ping method of the Connection class. The complete API is described here.
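A rough sketch of such a liveness check; the exact exceptions raised on failure depend on the rpyc version, so the broad except below is a deliberate assumption:

# rough liveness-check sketch; consult the rpyc API docs for the exact error types
import rpyc

conn = rpyc.classic.connect("hostname")
try:
    conn.ping()          # round-trips a payload; raises if the server is unreachable
    alive = True
except Exception:
    alive = False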
Fabric (http://docs.fabfile.org/en/1.0.1/index.html) is a pretty good toolkit for various sysadmin and deployment tasks. It comes with a few predefined tasks but also gives you the flexibility to add what you need.
I highly recommend it.
You should be able to use Python WMI; for *NIX-based systems it's a wrapper around SSH and cron.
I'm working on a grid system which has a number of very powerful computers. These can be used to execute python functions very quickly. My users have a number of python functions which take a long time to calculate on workstations, ideally they would like to be able to call some functions on a remote powerful server, but have it appear to be running locally.
Python has an old function called "apply" - it's mostly useless these days now that python supports the extended-call syntax (e.g. **arguments), however I need to implement something that works a bit like this:
rapply = Rapply( server_hostname ) # Set up a connection
result = rapply( fn, args, kwargs ) # Remotely call the function
assert result == fn( *args, **kwargs ) #Just as a test, verify that it has the expected value.
Rapply should be a class which can be used to remotely execute some arbitrary code (fn could be literally anything) on a remote server. It will send back the result which the rapply function will return. The "result" should have the same value as if I had called the function locally.
Now let's suppose that fn is a user-provided function; I need some way of sending it over the wire to the execution server. If I could guarantee that fn was always something simple, it could just be a string containing python source code... but what if it were not so simple?
What if fn has local dependencies? It could be a simple function which uses a class defined in a different module; is there a way of encapsulating fn and everything that fn requires which is not standard-library? An ideal solution would not require the users of this system to have much knowledge about python development. They simply want to write their function and call it.
Just to clarify, I'm not interested in discussing what kind of network protocol might be used to implement the communication between the client & server. My problem is how to encapsulate a function and its dependencies as a single object which can be serialized and remotely executed.
I'm also not interested in the security implications of running arbitrary code on remote servers - let's just say that this system is intended purely for research and it is within a heavily firewalled environment.
Take a look at PyRO (Python Remote Objects). It has the ability to set up services on all the computers in your cluster and invoke them directly, or indirectly through a name server and a publish-subscribe mechanism.
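As a rough sketch in the Pyro4 flavour of the API (the Worker class and its method are made up for illustration), the server side might look like this:

# rough Pyro4-style sketch; the Worker class and its method are hypothetical
import Pyro4

@Pyro4.expose
class Worker(object):
    def simulate(self, steps):
        return sum(i * i for i in range(steps))   # stand-in for a real computation

daemon = Pyro4.Daemon(host="0.0.0.0")              # listen on all interfaces
uri = daemon.register(Worker)                      # returns a PYRO:... URI
print(uri)
daemon.requestLoop()

A client would then create Pyro4.Proxy(uri) and call proxy.simulate(10**6) as if it were a local object.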
It sounds like you want to do the following.
Define a shared filesystem space.
Put ALL your python source in this shared filesystem space.
Define simple agents or servers that will "execfile" a block of code.
Your client then contacts the agent (a REST protocol with POST methods works well for this) with the block of code.
The agent saves the block of code and does an execfile on that block of code.
Since all agents share a common filesystem, they all have the same Python library structure.
We do this with a simple WSGI application we call "batch server". We have a RESTful protocol for creating and checking on remote requests.
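A very rough sketch of such an agent, assuming the code block arrives as the raw POST body and is trusted (no authentication or sandboxing); in Python 3, exec() stands in for execfile:

# minimal sketch of an "exec agent" served with wsgiref
from wsgiref.simple_server import make_server

def application(environ, start_response):
    length = int(environ.get("CONTENT_LENGTH") or 0)
    source = environ["wsgi.input"].read(length)
    namespace = {}
    exec(compile(source, "<posted-block>", "exec"), namespace)  # execfile-style execution
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok\n"]

if __name__ == "__main__":
    make_server("", 8000, application).serve_forever()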
Stackless had the ability to pickle and unpickle running code. Unfortunately, the current implementation doesn't support this feature.
You could use a ready-made clustering solution like Parallel Python. You can relatively easily set up multiple remote slaves and run arbitrary code on them.
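A rough sketch of the Parallel Python (pp) style of use, with made-up node addresses; it assumes ppserver.py is already running on the worker nodes:

import pp  # Parallel Python

def simulate(steps):
    return sum(i * i for i in range(steps))       # stand-in for a real computation

# connect to pp servers already running on the worker nodes
job_server = pp.Server(ppservers=("node1:60000", "node2:60000"))
job = job_server.submit(simulate, (1000000,))     # returns a callable "future"
print(job())                                      # blocks until the result comes back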
You could use a SSH connection to the remote PC and run the commands on the other machine directly. You could even copy the python code to the machine and execute it.
Syntax:
cat ./test.py | sshpass -p 'password' ssh user@remote-ip "python - script-arguments-if-any for test.py script"
1) Here "test.py" is the local Python script.
2) sshpass is used to pass the SSH password to the ssh connection.
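For completeness, a rough Python equivalent of that pipeline using subprocess, assuming key-based authentication is set up (so no sshpass is needed) and with placeholder host and arguments:

# rough Python equivalent of the shell pipeline above
import subprocess

with open("./test.py", "rb") as script:
    subprocess.run(
        ["ssh", "user@remote-ip", "python", "-", "arg1", "arg2"],  # host/args are placeholders
        stdin=script,
        check=True,
    )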