I am a newbie to Kubernetes. Recently, I was asked to set up Scylla on AWS. I followed the tutorial to deploy Scylla on EKS (http://operator.docs.scylladb.com/master/eks.html), and everything went well.
Then, I followed the Accessing the Database section in another related tutorial (http://operator.docs.scylladb.com/master/generic.html).
I was able to run the commands for the first two steps:
kubectl exec -n scylla -it scylla-cluster-us-east-1-us-east-1a-0 -- cqlsh
> DESCRIBE KEYSPACES;
kubectl -n scylla describe service scylla-cluster-client
However, I don't know how to perform the last step, which said:
Pods running inside the Kubernetes cluster can use this Service to connect to Scylla. Here’s an example using the Python Driver:
from cassandra.cluster import Cluster
cluster = Cluster(['scylla-cluster-client.scylla.svc'])
session = cluster.connect()
The script fails to resolve scylla-cluster-client.scylla.svc.
Therefore, I also tried different IPs, but a cassandra.cluster.NoHostAvailable error is raised.
In addition, I found that pip is not installed inside the container after connecting to the cluster via
kubectl exec -n scylla -it scylla-cluster-us-east-1-us-east-1a-0 -- /bin/bash
Can anyone help me solve the connection issue with the Python driver?
It would be great if you could tell me:
Why does scylla-cluster-client.scylla.svc not work for me?
What is the difference between kubectl exec -n ... and the Cassandra drivers?
Which IPs should I use? I noticed that there are cluster IPs from Kubernetes, internal IPs from Kubernetes, and public IPs of the EC2 machines from AWS. If a public IP is needed, do I need to open the ports (e.g. 9042) on AWS? How can I make it more secure?
Thanks in advance.
scylla-cluster-client.scylla.svc is a Kubernetes-resolvable DNS name (the .scylla part is the namespace), so it only works from pods hosted on the same cluster. You can't use it from the outside.
kubectl exec runs a command on one of the Scylla pods, so essentially you are running the command on the Scylla node itself and connecting to localhost on that node. In contrast, a driver using scylla-cluster-client.scylla.svc connects remotely (but still within the k8s network).
You don't need to use an IP - use the scylla-cluster-client.scylla.svc DNS name. If you want to use IP addresses you can manually resolve the DNS name or read the IP addresses of the Scylla pods using the k8s API (see the sketch below) - but there's really no reason to do that.
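For completeness, here is a minimal sketch of that k8s-API approach, assuming the script runs inside the cluster with RBAC permission to list pods; the app=scylla label selector is an assumption and may differ in your deployment:
from kubernetes import client, config
from cassandra.cluster import Cluster

# Load in-cluster credentials (use config.load_kube_config() when running outside the cluster).
config.load_incluster_config()
v1 = client.CoreV1Api()

# List the Scylla pods in the "scylla" namespace; the label selector is an assumption.
pods = v1.list_namespaced_pod("scylla", label_selector="app=scylla")
ips = [p.status.pod_ip for p in pods.items]

# Hand the resolved pod IPs straight to the driver.
cluster = Cluster(ips)
session = cluster.connect()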
If you want to connect from outside the cluster you would need a public Service or something like that - basically a k8s-managed proxy. In theory you could expose the pods publicly, but that's highly inadvisable.
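For ad-hoc testing from your own machine (not a production setup), a port-forward to the client Service is one option; note that drivers which discover peers may still try to reach the other nodes directly, so this sketch sticks to cqlsh:
kubectl -n scylla port-forward svc/scylla-cluster-client 9042:9042
# in another terminal on the same machine:
cqlsh 127.0.0.1 9042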
Related
Is it possible, using the docker SDK for Python, to launch a container in a remote machine?
import docker
client = docker.from_env()
client.containers.run("bfirsh/reticulate-splines", detach=True)
# I'd like to run this container ^^^ in a machine that I have ssh access to.
Going through the documentation, it seems like this type of management is out of scope for said SDK, so searching online I got hints that the Kubernetes client for Python could be of help, but I don't know where to begin.
It's not clearly documented by Docker SDK for Python, but you can use SSH to connect to the Docker daemon by specifying the host in ssh://[user]@[host] format, for example:
import docker
# Create a client connecting to Docker daemon via SSH
client = docker.DockerClient(base_url="ssh://username@your_host")
It's also possible to set the environment variable DOCKER_HOST=ssh://username@your_host and use the example you provided, which uses the current environment to create the client, for example:
import docker
import os
os.environ["DOCKER_HOST"] = "ssh://username@your_host"
# Or use `export DOCKER_HOST=ssh://username@your_host` before running Python
client = docker.from_env()
Note: as specified in the question, this assumes you have SSH access to the target host. You can test with
ssh username@your_host
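Putting it together with the container from the question, a minimal sketch (username and your_host are placeholders for your own SSH credentials):
import docker

# Connect to the remote Docker daemon over SSH and start the container there.
client = docker.DockerClient(base_url="ssh://username@your_host")
container = client.containers.run("bfirsh/reticulate-splines", detach=True)
print(container.id)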
It's possible; simply do this:
client = docker.DockerClient(base_url=your_remote_docker_url)
Here's the document I found related to this:
https://docker-py.readthedocs.io/en/stable/client.html#client-reference
If you only have SSH access to it, there is a use_ssh_client option.
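A short sketch of that option; use_ssh_client=True makes docker-py shell out to your local ssh binary instead of its built-in transport, so host aliases and keys from ~/.ssh/config are honored (username and your_host are placeholders):
import docker

# use_ssh_client=True delegates the connection to the system `ssh` binary.
client = docker.DockerClient(
    base_url="ssh://username@your_host",
    use_ssh_client=True,
)
print(client.ping())  # True if the remote daemon is reachable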
If you have a k8s cluster, you do not need the Python SDK. You only need the command-line tool kubectl.
Once you have it installed, you can create a deployment that will deploy your image, for example:
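A minimal sketch using the image from the question (the deployment name is arbitrary):
kubectl create deployment reticulate-splines --image=bfirsh/reticulate-splines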
I am trying to figure out where to get the hostname of a running docker container that was started using docker-py.
Based on the presence of a DOCKER_HOST environment variable, the started docker container may be on a remote machine and not on localhost (the machine running the docker-py code).
I looked inside the container object and I was not able to find any information that would be of use, as 'HostIp': '0.0.0.0' refers to the remote docker host.
I need an IP or DNS name of the remote machine.
I know that I could start parsing DOCKER_HOST myself and "guess" it, but this would not really be a reliable way of doing it, especially as there are multiple protocols involved: ssh:// and tcp:// at least.
I would expect there to be an API-based way of getting this information.
P.S. We can assume that the docker host does not have a firewall.
For the moment I ended up creating a bug on https://github.com/docker/docker-py/issues/2254 as I failed to find that information with the library.
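In the meantime, a hedged sketch of the parsing fallback the question dismisses; urllib.parse copes with both ssh:// and tcp:// URLs, though this is still a guess rather than an API answer:
import os
from urllib.parse import urlsplit

def docker_host_name(default="localhost"):
    """Best-effort extraction of the Docker host's name from DOCKER_HOST."""
    raw = os.environ.get("DOCKER_HOST", "")
    if not raw:
        return default  # no DOCKER_HOST set, so the daemon is local
    # ssh://user@host and tcp://host:2376 both expose .hostname
    return urlsplit(raw).hostname or default

print(docker_host_name())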
The best method is probably to use a website like wtfismyip.com.
You can use
curl wtfismyip.com
to print it in the terminal, and then extract the public IP from the output.
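The same thing in Python, assuming wtfismyip.com serves a plain-text endpoint (the /text path is an assumption about that site's API):
import requests

# Fetch the public IP as plain text.
ip = requests.get("https://wtfismyip.com/text", timeout=5).text.strip()
print(ip)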
I am trying to run the Flask mega-tutorial app on Azure from a Docker image. The Dockerfile is as given here; first I tried EXPOSE 5000 (as mentioned in this Dockerfile), but as that led to ERR_CONNECTION_TIMED_OUT, I then tried EXPOSE 80 as suggested here, but the error remained.
Both ports 5000 and 80 in the Dockerfile worked fine on a local server. Also, in each case, for Azure the instanceView.state=="Running" but pinging the IP address does not return anything.
The Azure-Docker helloWorld image also runs fine and my Azure CLI commands are exactly the same as in this example except for changing the container registry name etc. Apart from CLI, I tried doing it on the Azure portal as well with same outcome.
Thanks
If there is no issue with your image and it works fine locally, then it is most likely a port issue with Azure Container Instances:
Azure Container Instances does not currently support port mapping like with regular docker configuration
This means that if you expose port 5000 in the container, you should expose the same port in the Azure Container Instance group; for more details, see "IPs may not be accessible due to mismatched ports". Also, it may be better to use port 80. Hope this helps; if you have more questions you can message me.
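For illustration, a hedged sketch of the az CLI call with the ports matched up; the resource group, container name, and registry path are placeholders:
az container create \
  --resource-group myResourceGroup \
  --name flask-app \
  --image myregistry.azurecr.io/flask-app:latest \
  --ports 5000 \
  --ip-address Public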
I tested with the application given in your GitHub. Here is a screenshot of the result:
I have a compute engine instance running on Google cloud platform.
I would like to use the Python interpreter of the compute engine as a remote interpreter with PyCharm. This means that I would be using PyCharm on my local machine while running computations remotely.
Any clue on how to achieve this?
The following requires, as James Hirschhorn pointed out, the Professional version of PyCharm.
Assign a public IP to the remote machine on GCP.
Run gcloud compute config-ssh to automatically add the VMs of your project to your ~/.ssh/config, or manually add the public IP of your VM to it. If you skipped step 1, you have to run gcloud compute config-ssh every time you restart the remote VM, because it always gets a new IP assigned. The ~/.ssh/config gets populated with many entries in the following format:
Host the-vm-host-name # use this in PyCharm's Host field
HostName 123.456.789.00 # the VM's IP address
Use the Host name of the remote you want to connect to in your Deployment configuration in PyCharm.
Add a remote interpreter: select the remote server from the drop-down (the one previously created) and point PyCharm to the python executable of your Python installation.
Done
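As a quick sanity check before configuring PyCharm, you can verify the generated SSH entry works and locate the interpreter (the-vm-host-name is the placeholder host from the snippet above):
ssh the-vm-host-name   # should log you into the VM without prompting for an IP
which python3          # path to paste into PyCharm's interpreter field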
My understanding is that you need the PyCharm Professional Edition to support remote servers. If you have it, then you can follow these instructions.
It's fairly easy to accomplish.
You need:
PyCharm Pro
Create and format SSH keys
Configure your Compute Engine instance with the SSH keys
Configure PyCharm
You can follow this tutorial that I wrote.
I'm trying to connect PyCharm to a Docker remote interpreter, but the container is part of a cluster (specifically, an AWS ECS cluster).
I cannot access the container directly; I have to go through the bastion machine (the instance that is running the container does not have a public IP).
I.e., to access the container I need to go via the bastion machine.
I had the idea of using SSH tunneling, but I could not figure out how to do so using the PyCharm Docker utility.
Is there any solution that PyCharm suggests for that?
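For reference, a hedged sketch of the tunneling idea from the question: forward a local port through the bastion to the instance's Docker daemon, then point PyCharm's Docker integration at the local end. The hostnames are placeholders, and it assumes the ECS instance's Docker daemon has been configured to listen on TCP port 2375 (it does not by default):
# Forward local port 2375 through the bastion to the ECS instance's Docker daemon.
ssh -N -L 2375:ecs-instance-private-ip:2375 ec2-user@bastion-public-ip
# Then, in PyCharm: Settings > Build, Execution, Deployment > Docker,
# add a Docker daemon with the engine API URL tcp://localhost:2375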