This Fabric command works just fine for fab local grab_from_s3:bucket=...:
from boto.s3.connection import S3Connection
from boto.s3.key import Key

def grab_from_s3(bucket, path, localfile):
    s3_connection = S3Connection()
    s3_bucket = s3_connection.get_bucket(bucket)
    s3_key = Key(s3_bucket)
    s3_key.key = path
    s3_key.get_contents_to_filename(localfile)
Of course, if I feed it a remote target host, it downloads to the local host and not the remote one (e.g. fab staging grab_from_s3:bucket=...).
I'm hoping one of these possibilities exists:
That task can be automatically run on the remote box with a minimum of coding fuss, or
I can programmatically detect that the target host isn't local, and specify a fabric command line for the remote host.
I'd vastly prefer #1, but it's not clear if that's even possible. What's not clear about #2 is whether there are existing Fabric facilities that make this easy (i.e. detecting the local/remote hosts and the fact that they're different).
What should I do?
Fabric doesn't support running arbitrary Python code on a remote host. Fabric mostly works by invoking shell commands over SSH (for remote machines). The remote machine doesn't even need Python installed for Fabric to work.
The execnet project allows you to run Python code over the network in the way you're imagining; you can run the same code, without modification, locally and remotely.
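A rough sketch of how execnet could be used here (the SSH spec user@remote_host is a placeholder, and this assumes Python is available on the remote machine):
import execnet

# Open an SSH gateway to the remote host and run a small snippet there.
gw = execnet.makegateway("ssh=user@remote_host")  # placeholder host
channel = gw.remote_exec(
    "import socket\n"
    "channel.send(socket.gethostname())\n"
)
print(channel.receive())  # prints the remote machine's hostname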
The simplest solution might be to have the same fabric code deployed to both the local host and the remote host, then have the local task run the S3 download task remotely (via a shell command).
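A minimal sketch of that idea, assuming the same fabfile (with grab_from_s3 defined) is already deployed on the remote host, fab is on its PATH, and /srv/app is a hypothetical directory where that fabfile lives:
from fabric.api import cd, run

def remote_grab_from_s3(bucket, path, remotefile):
    # Runs on whatever host fab targets; the remote copy of the fabfile then
    # performs the S3 download locally on that machine.
    with cd("/srv/app"):  # hypothetical location of the remote fabfile
        run("fab local grab_from_s3:bucket=%s,path=%s,localfile=%s"
            % (bucket, path, remotefile))
You would invoke it from the local host as something like fab staging remote_grab_from_s3:bucket=...,path=...,remotefile=....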
One problem you will run into (both with execnet and with running the fab task on the remote machine) is that you'll need boto installed on the remote machine. Not sure if you have that set up already.
Another option would be to run a command line tool like s3cmd or the official AWS command line client via fabric on the remote host.
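For instance, a hedged sketch of that option, assuming the AWS CLI is installed and has credentials configured on the remote host:
from fabric.api import run

def grab_from_s3_via_cli(bucket, path, remotefile):
    # "aws s3 cp" runs on the target host, so the file lands there directly.
    run("aws s3 cp s3://%s/%s %s" % (bucket, path, remotefile))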
Use Fabric on your local host to run Fabric on the remote host, which will allow you to execute Python commands on the remote host. Since Fabric configurations can be very complex, the following is a snippet that will not necessarily work in your environment without modification. For example, the command to run Fabric may be different on various platforms.
Define a function in your fabfile to execute the desired Python code:
from boto.s3.connection import S3Connection
from boto.s3.key import Key

def local_grab_from_s3():
    bucket = "my_bucket"
    path = "my_path"
    localfile = "local_filename"
    s3_connection = S3Connection()
    s3_bucket = s3_connection.get_bucket(bucket)
    s3_key = Key(s3_bucket)
    s3_key.key = path
    s3_key.get_contents_to_filename(localfile)
Define another function that calls the first function on the remote host:
from fabric.api import run

def grab_from_s3():
    run("fab local_grab_from_s3")
On the command line of the local host, run fab -H remote_host grab_from_s3
Is it possible, using the docker SDK for Python, to launch a container in a remote machine?
import docker
client = docker.from_env()
client.containers.run("bfirsh/reticulate-splines", detach=True)
# I'd like to run this container ^^^ in a machine that I have ssh access to.
Going through the documentation, it seems like this type of management is out of scope for said SDK, so searching online I got hints that the Kubernetes client for Python could be of help, but I don't know where to begin.
It's not clearly documented in the Docker SDK for Python, but you can use SSH to connect to the Docker daemon by specifying the host in ssh://[user]@[host] format, for example:
import docker
# Create a client connecting to Docker daemon via SSH
client = docker.DockerClient(base_url="ssh://username@your_host")
It's also possible to set the environment variable DOCKER_HOST=ssh://username@your_host and use the example you provided, which uses the current environment to create the client, for example:
import docker
import os
os.environ["DOCKER_HOST"] = "ssh://username@your_host"
# Or use `export DOCKER_HOST=ssh://username@your_host` before running Python
client = docker.from_env()
Note: as specified in the question, this assumes you have SSH access to the target host. You can test with
ssh username@your_host
It's possible; simply do this:
client = docker.DockerClient(base_url=your_remote_docker_url)
Here's the documentation I found related to this:
https://docker-py.readthedocs.io/en/stable/client.html#client-reference
If you only have SSH access to it, there is a use_ssh_client option.
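A small sketch combining the two (the hostname is a placeholder, and this assumes a recent docker-py version); use_ssh_client=True makes docker-py shell out to your local ssh binary instead of using paramiko:
import docker

# Connect to the remote Docker daemon over SSH using the system ssh client.
client = docker.DockerClient(
    base_url="ssh://username@your_host",  # placeholder host
    use_ssh_client=True,
)
print(client.containers.list())  # containers running on the remote host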
If you have a k8s cluster, you do not need the Python SDK. You only need the command-line tool kubectl.
Once you have it installed, you can create a deployment that will deploy your image.
I have a Docker image and an associated container that runs a jupyter-lab server. On this Docker image I have a very specific Python module that cannot be installed on the host. On my host, I have all my work environment that I don't want to run in the Docker container.
I would like to use that module from a Python script running on the host. My first idea is to use docker-py (https://github.com/docker/docker-py) on the host like this:
import docker
client = docker.from_env()
container = client.containers.run("myImage", detach=True)
container.exec_run("python -c 'import mymodule; # do stuff; print(something)'")
and get the output and keep working in my script.
Is there a better solution? Is there a way to connect to the jupyter server within the script on the host for example?
Thanks
First. As @dagnic states in his comment, there are those 2 modules that let you drive the Docker runtime from your Python script (there are probably more; another, different question would be "which one is best").
Second. Without knowing anything about Jupyter, since you call it a "server", that tells me you can port-map that server (remember -p 8080:80 or --publish 8080:80, yeah that's it!). After setting up a port mapping for your container you would be able to, e.g., use the pycurl module and "talk" to that service.
Remember, if you "talk on a port" to your server, you might also want to set up that port mapping with docker-py.
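For example, a sketch of setting up that mapping directly from docker-py (image, port, and environment are taken from the docker run command shown just below; treat them as assumptions about your setup):
import docker

client = docker.from_env()
# Publish the container's Jupyter port 8888 on host port 9999 and enable JupyterLab.
container = client.containers.run(
    "jupyter/base-notebook:latest",
    detach=True,
    ports={"8888/tcp": 9999},
    environment={"JUPYTER_ENABLE_LAB": "yes"},
)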
Since you asked if any better solution exists: these two methods would be the more popular ones. The first one is convenient for your script; the second launches a server and you can use pycurl from your host script as you asked (connect to the Jupyter server). E.g. if you launch the Jupyter server like:
docker run -p 9999:8888 -it -e JUPYTER_ENABLE_LAB=yes jupyter/base-notebook:latest
you can use pycurl like:
import pycurl
from io import BytesIO
b_obj = BytesIO()
crl = pycurl.Curl()
# Set URL value (host port 9999 maps to the container's 8888; plain HTTP)
crl.setopt(crl.URL, 'http://localhost:9999')
# Write bytes that are utf-8 encoded
crl.setopt(crl.WRITEDATA, b_obj)
# Perform a file transfer
crl.perform()
# End curl session
crl.close()
# Get the content stored in the BytesIO object (in byte characters)
get_body = b_obj.getvalue()
# Decode the bytes stored in get_body to HTML and print the result
print('Output of GET request:\n%s' % get_body.decode('utf8'))
Update:
So you have two questions:
1. Is there a better solution?
Basically, use the docker-py module and run the Jupyter server in a Docker container (and there are a few other options not involving Docker, I suppose).
2. Is there a way to connect to the jupyter server within the script on the host for example?
There is an example of how to run Jupyter in Docker above (the docker run command shown earlier).
The rest is using pycurl from your code to talk to that Jupyter server from your host computer.
I am using a subprocess to run a program on another machine through a mounted network file system.
For my first step in this project I have been using sshfs mount and wget:
sys_process = subprocess.Popen(wget_exec.split(), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
using the command: wget works perfectly
using the command: /mnt/ssh_mount/wget does not execute
System libraries:
my remote system is Arch Linux, whose wget binary requires libpcre.so.1
my local system is Ubuntu, which uses libpcre3, so libpcre.so.1 is missing
I know this because when I call the wget command through the ssh mount (/mnt/ssh_mount/bin/wget) it throws an error. I do not wish to install the needed libraries on all systems using this, as it defeats the purpose of trying to run something remotely.
For good measure, permission checks have been made.
How do I get the command to use the local libraries?
I hope to use NFS as well, which would rule out the solutions below:
Python subprocess - run multiple shell commands over SSH
Paramiko
I have tried (with no success):
os.chdir('/mnt/ssh_mount')
This still fails with "error while loading shared libraries: libpcre.so.0: cannot open shared object file: No such file or directory", and it assumes a stable mount point, which would mean changes in 2 places whenever the environment changes (this seems wrong coming from a database normalization background; I would assume the same applies to code/sysadmin work).
You are not actually running the wget command on the remote machine - you are trying to run the remote machine's binary on your local system, and the command is failing due to incompatible library versions. sshfs, nfs, and other types of network mounting protocols simply mount the remote filesystem as an extension of your local one - they don't allow for remote execution. To do that, you'll need to open a remote shell using ssh tunneling and execute the Arch wget command through that.
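A minimal sketch of that, staying with subprocess (assumes key-based SSH access; user@arch-host and the URL are placeholders):
import subprocess

# Run the remote machine's own wget over SSH so it links against the remote
# (Arch) libraries instead of the local Ubuntu ones.
result = subprocess.run(
    ["ssh", "user@arch-host", "wget", "-q", "-O", "/tmp/output.bin",
     "https://example.com/file"],
    capture_output=True,
    text=True,
)
print(result.returncode, result.stderr)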
Would you please let me know how I can run code from my local machine on a remote server?
I have the source code and data on my local machine, but I would like to run the code on a remote server.
One solution would be:
Install python on the remote machine
Package your code into a Python package using distutils (see http://wiki.python.org/moin/Distutils/Tutorial). Basically, the process ends when you run the command python setup.py sdist in the root dir of your project and get a tar.gz file in the dist/ subfolder (a minimal setup.py sketch appears after these steps).
Copy your package to the remote server using scp, for example, if it is an amazon machine:
scp -i myPemFile.pem local-python-package.tar.gz remote_user_name@remote_ip:remote_folder
Run sudo pip install local-python-package.tar.gz on the remote server
Now you can either SSH to the remote machine and run your code or use some remote enabler such as fabric to start commands on the remote server (works for any shell command, including Python scripts)
Alternatively, you can just skip the package building in [2]; if you have a simple script, just scp the script itself to the remote machine and proceed using a remote python myscript.py
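As an illustration of step 2, a minimal setup.py along these lines could produce the sdist tarball (the name, version, and module are placeholders):
# setup.py -- minimal sketch; adjust name, version and modules to your project
from distutils.core import setup

setup(
    name="local-python-package",
    version="0.1",
    py_modules=["myscript"],
)
Running python setup.py sdist in the project root then writes dist/local-python-package-0.1.tar.gz, which is the file you scp to the server and install with pip.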
Hope this helps
I would recommend setting up a git repository on the remote server and connecting your local source to it (you can read about how to do that here: http://git-scm.com/book).
Then you can use, e.g., Eclipse EGit, and after you change your local code you can push it to the remote location.
Are there any existing libraries for implementing a remote command line interface?
e.g.
Consider the case of gitolite: when you do a git push origin, it will ssh into the remote server and execute some code (namely hooks); no server is needed for the whole transaction.
What I want to achieve is something like that, e.g.
./remote get_up_time
It will ssh into the remote machine and execute the script get_up_time that is already deployed there.
The Ruby standard distribution provides DRb, aka Distributed Ruby:
http://www.ruby-doc.org/stdlib-1.9.3/libdoc/drb/rdoc/DRb.html
Implementing your own script would be easy enough.
#!/bin/bash
ssh root#example.com "$1"
Then you could just call it like:
./remote.sh get_up_time
Assuming that ssh is installed and that your public key is on the remote server (and thus, you do not need to provide a password), this would be a crude implementation:
#!/usr/bin/env ruby
host = ARGV.shift
command = ARGV.join(' ')
IO.popen("ssh #{host} \"#{command}\"") do |io|
  io.sync = true
  io.readlines.each { |line| puts line }
end
Usable like:
$ ./remote.rb www.example.com ls -l
You can expand this as needed to provide extra features, such as reading from stdin to provide ssh a password or something.
Although it appears there is "no server", there is certainly an sshd (a remote server) running on the remote system that allows this to work.
If you don't want to use ssh, you will need to use another server running on the remote machine or write your own.