How to spawn a docker container in a remote machine - python

Is it possible, using the docker SDK for Python, to launch a container in a remote machine?
import docker
client = docker.from_env()
client.containers.run("bfirsh/reticulate-splines", detach=True)
# I'd like to run this container ^^^ in a machine that I have ssh access to.
Going through the documentation it seems like this type of management is out of scope for said SDK, so searching online I got hints that the Kubernetes client for Python could be of help, but I don't know where to begin.

It's not clearly documented in the Docker SDK for Python, but you can use SSH to connect to the Docker daemon by specifying the host in the ssh://[user]@[host] format, for example:
import docker
# Create a client connecting to Docker daemon via SSH
client = docker.DockerClient(base_url="ssh://username@your_host")
It's also possible to set the environment variable DOCKER_HOST=ssh://username@your_host and use the example you provided, which creates the client from the current environment, for example:
import docker
import os
os.environ["DOCKER_HOST"] = "ssh://username@your_host"
# Or use `export DOCKER_HOST=ssh://username@your_host` before running Python
client = docker.from_env()
Note: as specified in the question, this assumes you have SSH access to the target host. You can test that with
ssh username@your_host
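Putting it together, a rough sketch (username/your_host are placeholders for your own SSH access; ssh:// URLs may need the paramiko extra, e.g. pip install "docker[ssh]"):
import docker
# Connect to the remote daemon over SSH and run the container there
client = docker.DockerClient(base_url="ssh://username@your_host")
container = client.containers.run("bfirsh/reticulate-splines", detach=True)
print(container.id)  # the container is now running on the remote machine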

It's possible, simply do this:
client = docker.DockerClient(base_url=your_remote_docker_url)
Here's the documentation I found related to this:
https://docker-py.readthedocs.io/en/stable/client.html#client-reference
If you only have SSH access to it, there is a use_ssh_client option.
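For example, a minimal sketch (the hostname is a placeholder; use_ssh_client makes the SDK shell out to your local ssh binary, so your ~/.ssh/config and keys are honored):
import docker
# use_ssh_client=True delegates the connection to the local `ssh` command instead of paramiko
client = docker.DockerClient(base_url="ssh://username@your_host", use_ssh_client=True)
print(client.ping())  # True if the remote daemon is reachable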

If you have a k8s cluster, you do not need the Python SDK. You only need the command-line tool kubectl.
Once you have it installed, you can create a deployment that will deploy your image.
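For example (a rough sketch; the deployment name is arbitrary and kubectl must already be pointed at your cluster):
kubectl create deployment reticulate-splines --image=bfirsh/reticulate-splines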

Related

Run python commands in a docker container from python script on host

I have a docker image and an associated container that runs a jupyter-lab server. On this docker image I have a very specific Python module that cannot be installed on the host. On my host, I have all my work environment that I don't want to run in the docker container.
I would like to use that module from python script running on the host. My first idea is to use docker-py (https://github.com/docker/docker-py) on the host like this:
import docker
client = docker.from_env()
container = client.containers.run("myImage", detach=True)
container.exec_run("python -c 'import mymodule; # do stuff; print(something)'")
and get the output and keep working in my script.
Is there a better solution? Is there a way to connect to the jupyter server within the script on the host for example?
Thanks
First: as @dagnic states in his comment, there are those two modules that let you drive the Docker runtime from your Python script (there are probably more; "which one is best" would be a different question).
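For the docker-py route the question sketches, getting the output back could look roughly like this (image and module names come from the question; mymodule.something() is just an illustrative call):
import docker
client = docker.from_env()
container = client.containers.run("myImage", detach=True)
# exec_run returns an (exit_code, output) tuple; output is the command's captured bytes
exit_code, output = container.exec_run("python -c 'import mymodule; print(mymodule.something())'")
print(exit_code, output.decode())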
Second: without knowing much about Jupyter, since you call it a "server" it means to me that you can set up a port mapping for that server (remember -p 8080:80 or --publish 8080:80, yeah that's it!). After setting a port mapping for your container, you would be able to use, e.g., the pycurl module and "talk" to that service.
Remember, if you "talk on a port" to your server, you can also set up that port mapping when launching the container with docker-py.
Since you asked if a better solution exists: these two methods would be the more popular ones. The first is convenient for your script; the second launches a server, and you can use pycurl from your host script as you asked (connect to the Jupyter server). I.e., if you launch the Jupyter server like:
docker run -p 9999:8888 -it -e JUPYTER_ENABLE_LAB=yes jupyter/base-notebook:latest
you can use pycurl like this:
import pycurl
from io import BytesIO
b_obj = BytesIO()
crl = pycurl.Curl()
# Set URL value (container port 8888 is published on host port 9999 above)
crl.setopt(crl.URL, 'http://localhost:9999')
# Write the response body into the BytesIO buffer
crl.setopt(crl.WRITEDATA, b_obj)
# Perform a file transfer
crl.perform()
# End curl session
crl.close()
# Get the content stored in the BytesIO object (in byte characters)
get_body = b_obj.getvalue()
# Decode the bytes stored in get_body to HTML and print the result
print('Output of GET request:\n%s' % get_body.decode('utf8'))
Update:
So you have two questions:
1. Is there a better solution?
Basically, use the docker-py module and run the Jupyter server in a Docker container (and there are a few other options not involving Docker, I suppose).
2. Is there a way to connect to the jupyter server within the script on the host for example?
The docker run command above is an example of how to run Jupyter in Docker.
The rest is to use pycurl from your code to talk to that Jupyter server from your host computer.

Access Scylla on EKS with Python Driver

I am a newbie to Kubernetes. Recently, I am asked to set up Scylla on AWS. I followed the tutorial to deploy Scylla on EKS (http://operator.docs.scylladb.com/master/eks.html). Everything went well.
Then, I followed Accessing the Database section in another related tutorial (http://operator.docs.scylladb.com/master/generic.html).
I was able to run the commands for the first two steps:
kubectl exec -n scylla -it scylla-cluster-us-east-1-us-east-1a-0 -- cqlsh
> DESCRIBE KEYSPACES;
kubectl -n scylla describe service scylla-cluster-client
However, I don't know how to perform the last step, which said:
Pods running inside the Kubernetes cluster can use this Service to connect to Scylla. Here’s an example using the Python Driver:
from cassandra.cluster import Cluster
cluster = Cluster(['scylla-cluster-client.scylla.svc'])
session = cluster.connect()
The script fails to resolve scylla-cluster-client.scylla.svc.
Therefore, I also tried different IPs, but a cassandra.cluster.NoHostAvailable error is encountered.
In addition, I found that pip is not installed after connecting to the cluster via
kubectl exec -n scylla -it scylla-cluster-us-east-1-us-east-1a-0 -- /bin/bash
Can anyone help me solve the connection issue using Python driver?
It would be great if you can tell me:
Why scylla-cluster-client.scylla.svc does not work for me?
What is the difference between kubectl exec -n ... and the Cassandra drivers?
Which IPs should I use? I noticed that there are cluster IPs from Kubernetes, internal IPs from Kubernetes, and public IPs of the EC2 machines from AWS. If public IP is needed, do I need to open the ports (e.g. 9042) on AWS? How to make it more secure?
Thanks in advance.
scylla-cluster-client.scylla.svc is a k8s-resolvable DNS address, so it only works from pods hosted in the same cluster (and namespace). You can't use it from the outside.
kubectl exec runs a command on one of the Scylla pods, so essentially you are running the command on the Scylla node itself and connecting to localhost on that node. In contrast, scylla-cluster-client.scylla.svc is connecting remotely (but within the k8s network)
You don't need to use an IP - use the scylla-cluster-client.scylla.svc DNS name. If you want to use IP addresses you can manually resolve the DNS name or read the IP addresses of the Scylla pods using the k8s API - but there's really no reason to do that.
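For example, if you do want to look at the pod IPs anyway, something like this will list them:
kubectl get pods -n scylla -o wide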
If you want to connect from outside the cluster you would need a public service or something like that - basically a k8s-managed proxy. In theory you could expose the pods publicly, but that's highly inadvisable.
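For quick local testing only (not production), one rough sketch is to port-forward the CQL port with kubectl and point the driver at localhost; the driver may warn about peer IPs it discovers but cannot reach, yet the contact point works:
# In a shell first: kubectl -n scylla port-forward svc/scylla-cluster-client 9042:9042
from cassandra.cluster import Cluster
# Connect through the forwarded local port
cluster = Cluster(['127.0.0.1'], port=9042)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())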

How to access a server (which is running in a docker container) from another machine?

I'm new to Docker. I have deployed a Python server in a Docker container, and I'm able to access it from my Python application on my machine using the virtual machine IP (192.168.99.100).
Ex: http://192.168.99.100:5000
How do I access the application from another machine which is on the same network?
I tried giving my machine's IP but it didn't work.
I run the application using "docker run -p 5000:80 myPythonApp"
An easy way out would be to forward the port from your host machine to the virtual machine.
The solution may differ depending on the VM provider and host OS that you use. For example, for Vagrant you can do something like the below:
https://www.vagrantup.com/docs/networking/forwarded_ports.html
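If your VM is the Docker Toolbox/VirtualBox machine (which the 192.168.99.100 address suggests), a rough equivalent with VBoxManage (assuming the VM is named "default") would be:
VBoxManage controlvm "default" natpf1 "pythonapp,tcp,,5000,,5000"
After that, other machines on your network should be able to reach the app at http://<your-machine-IP>:5000.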

docker orchestration - connect to remote interpreter using pycharm

I'm trying to connect PyCharm to a Docker remote interpreter, but the container is part of a cluster (specifically an AWS ECS cluster).
I cannot access the container directly; I have to go through a bastion machine (the instance that is running the container does not have a public IP),
i.e. to access the container I need to go via the bastion machine.
I had an idea of using SSH tunneling, but I could not figure out how to do so using the PyCharm Docker utility.
Is there any solution that PyCharm suggests for that?
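For what it's worth, one rough sketch of the SSH-tunnel idea (all hostnames are placeholders; it assumes a reasonably recent OpenSSH with ProxyJump support and that your user on the container instance can read /var/run/docker.sock): forward a local TCP port to the remote Docker socket through the bastion, then point PyCharm's Docker configuration at tcp://localhost:2375.
ssh -N -J ec2-user@bastion-host -L localhost:2375:/var/run/docker.sock ec2-user@container-instance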

Fabric: executing inline Python on target host?

This Fabric command works just fine for fab local grab_from_s3:bucket=...:
from boto.s3.connection import S3Connection
from boto.s3.key import Key

def grab_from_s3(bucket, path, localfile):
    # Download the S3 object at `path` in `bucket` to `localfile`
    s3_connection = S3Connection()
    s3_bucket = s3_connection.get_bucket(bucket)
    s3_key = Key(s3_bucket)
    s3_key.key = path
    s3_key.get_contents_to_filename(localfile)
Of course, if I feed it a remote target host, it downloads to the local host and not the remote. (i.e. fab staging grab_from_s3:bucket=...).
I'm hoping one of these possibilities exists:
That task can be automatically run on the remote box with a minimum of coding fuss, or
I can programmatically detect that the target host isn't local, and specify a fabric command line for the remote host.
I'd vastly prefer #1, but it's not clear if that's even possible. What's not clear about #2 is whether there are existing Fabric facilities that makes this easy (i.e. detecting the local/remote hosts and the fact that they're different).
What should I do?
Fabric doesn't support running arbitrary python code on a remote host. Fabric mostly runs by invoking shell commands over SSH (for remote machines). The remote machine doesn't even need python installed for Fabric to work.
The execnet project allows you to run python code over the network like you're imagining, you can run the same code without modifications locally and remotely.
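For example, a rough execnet sketch (the hostname is a placeholder; unlike Fabric, this does require Python on the remote side):
import execnet
# Open an SSH gateway and execute a small snippet in a remote Python interpreter
gw = execnet.makegateway("ssh=username@remote-host")
channel = gw.remote_exec("""
import platform
channel.send(platform.node())
""")
print(channel.receive())  # value sent back from the remote interpreter
gw.exit()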
The simplest solution might be to have the same fabric code deployed to both the local host and the remote host, then have the local task run the S3 download task remotely (via a shell command).
One problem you will run into (both with execnet and running the fab task on the remote machine) is that you'll need boto installed on the remote machine. Not sure if you have that setup already.
Another option would be to run a command line tool like s3cmd or the official AWS command line client via fabric on the remote host.
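For example, a rough sketch of that last option with Fabric 1-style tasks (assuming the AWS CLI is installed and has credentials on the remote host; the bucket, path and filenames are placeholders):
from fabric.api import run

def grab_from_s3_remotely():
    # Runs the official AWS CLI on the remote host, so the file lands there
    run("aws s3 cp s3://my-bucket/my/path /tmp/localfile")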
Use Fabric on your local host to run Fabric on the remote host, which will allow you to execute Python commands on the remote host. Since Fabric configurations can be very complex, the following is a snippet that will not necessarily work in your environment without modification. For example, the command to run Fabric may be different on various platforms.
Define a function in your fabfile to execute the desired Python code:
# fabfile imports (boto for S3 access, Fabric's run for remote execution)
from fabric.api import run
from boto.s3.connection import S3Connection
from boto.s3.key import Key

def local_grab_from_s3():
    # Runs on the remote host; downloads the S3 object to a file on that host
    bucket = "my_bucket"
    path = "my_path"
    localfile = "local_filename"
    s3_connection = S3Connection()
    s3_bucket = s3_connection.get_bucket(bucket)
    s3_key = Key(s3_bucket)
    s3_key.key = path
    s3_key.get_contents_to_filename(localfile)
Define another function that calls the first function on the remote host:
def grab_from_s3():
    # Fabric's run() executes this shell command on the remote host targeted by fab -H
    run("fab local_grab_from_s3")
On the command line of the local host, run fab -H remote_host grab_from_s3
