Context
I have an application which uses a service running in my Kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  ...
spec:
  ports:
  - port: 5672
  ...
$ kubectl get services
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
...
rabbitmq   ClusterIP   10.105.0.215   <none>        5672/TCP   25h
That application also has a Python client which at some point needs to connect to that service (for example, using pika). The client runs outside the cluster, of course, but on a machine with a kubectl configuration.
I would like to design the code of the "client" module as if it were inside the cluster (or close to it):
import pika

host = 'rabbitmq'
port = 5672

class AMQPClient(object):
    def __init__(self):
        """Creates a connection with an AMQP broker."""
        self.parameters = pika.ConnectionParameters(host=host, port=port)
        self.connection = pika.BlockingConnection(self.parameters)
        self.channel = self.connection.channel()
Issue
When I run the code I get the following error:
$ ./client_fun some_arguments
2020-09-18 09:36:31,137 - ERROR - Address resolution failed: gaierror(-3, 'Temporary failure in name resolution')
Of course, as "rabbitmq" is not in my network but in the k8-cluster network.
However, as kubernetes python client uses a proxy interface, according to this manually-constructing-apiserver-proxy-urls it should be possible to access to the service using an url similar to this:
host = 'https://xxx.xxx.xxx.xxx:6443/api/v1/namespaces/default/services/rabbitmq/proxy'
This does not work, so something else is missing.
In theory, kubectl itself reaches the cluster, so maybe there is an easy way for my application to access the rabbitmq service without using a NodePort.
Note the following:
The service does not necessarily use the HTTP/HTTPS protocol.
The IP of the cluster might differ, so the proxy URL cannot be hardcoded; a kubernetes Python client function should be used to get the IP and port, similar to kubectl cluster-info (see manually-constructing-apiserver-proxy-urls).
Port-forwarding to the internal service might be a perfect solution (see forward-a-local-port-to-a-port-on-the-pod); a sketch follows.
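For that last option, here is a minimal sketch of what the client could do: shell out to kubectl port-forward and then let pika connect to localhost as if it were in-cluster. The connect_via_port_forward helper is made up for illustration, and it assumes kubectl is on the PATH and configured for the cluster:

import subprocess
import time

import pika

def connect_via_port_forward(service='rabbitmq', port=5672):
    """Open a kubectl port-forward tunnel to the service and connect pika through it."""
    # Forward localhost:<port> to the in-cluster service port.
    tunnel = subprocess.Popen(
        ['kubectl', 'port-forward', f'service/{service}', f'{port}:{port}'],
        stdout=subprocess.DEVNULL,
    )
    time.sleep(2)  # crude wait for the tunnel to come up
    parameters = pika.ConnectionParameters(host='localhost', port=port)
    connection = pika.BlockingConnection(parameters)
    return connection, tunnel  # caller should call tunnel.terminate() when done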
If you want to use the Python client you need the kubernetes package: https://github.com/kubernetes-client/python
In this package you can find how to connect to k8s.
If you want to use the k8s API itself, you need a k8s token. The k8s API docs are useful, and you can see exactly what kubectl does under the hood by adding -v 9, like: kubectl get ns -v 9
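As a sketch of the client-package route, which also avoids hardcoding the cluster IP as the question's notes ask, the API server address can be read from the loaded kubeconfig:

from kubernetes import client, config

# Load the same kubeconfig that kubectl uses.
config.load_kube_config()

# The API server address, e.g. 'https://xxx.xxx.xxx.xxx:6443';
# this is the same endpoint that kubectl cluster-info reports.
api_host = client.ApiClient().configuration.host

# The apiserver proxy URL for the service, as in the question. Note that
# the proxy only speaks HTTP/HTTPS, so it cannot carry raw AMQP traffic.
proxy_url = f'{api_host}/api/v1/namespaces/default/services/rabbitmq/proxy'
print(proxy_url)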
I would suggest not accessing rabbitmq through the kubernetes API server as a proxy: it puts load on the API server, and the API server becomes a single point of failure.
I would instead expose rabbitmq with a LoadBalancer type service. If you are not in a supported cloud environment (AWS, Azure, etc.), you could use MetalLB as the load balancer implementation; a sketch follows.
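A minimal sketch of that change from the Python client, assuming the service name and namespace from the question (the same edit can of course be made directly in the manifest):

from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Switch the existing ClusterIP service to type LoadBalancer. On bare
# metal, a controller such as MetalLB must be installed for an external
# IP to actually be assigned.
v1.patch_namespaced_service(
    name='rabbitmq',
    namespace='default',
    body={'spec': {'type': 'LoadBalancer'}},
)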
Related
I'm trying to access Azure Event Hubs, but my network forces me to use a proxy and only allows connections over HTTPS (port 443).
Based on https://learn.microsoft.com/en-us/python/api/azure-eventhub/azure.eventhub.aio.eventhubproducerclient?view=azure-python
I added the proxy configuration and the TransportType.AmqpOverWebsocket parameter, and my producer looks like this:
from azure.eventhub import TransportType
from azure.eventhub.aio import EventHubProducerClient

async def run():
    producer = EventHubProducerClient.from_connection_string(
        "Endpoint=sb://my_eh.servicebus.windows.net/;SharedAccessKeyName=eh-sender;SharedAccessKey=MFGf5MX6Mdummykey=",
        eventhub_name="my_eh",
        auth_timeout=180,
        http_proxy=HTTP_PROXY,
        transport_type=TransportType.AmqpOverWebsocket,
    )
and I get an error:
File "/usr/local/lib64/python3.9/site-packages/uamqp/authentication/cbs_auth_async.py", line 74, in create_authenticator_async
raise errors.AMQPConnectionError(
uamqp.errors.AMQPConnectionError: Unable to open authentication session on connection b'EHProducer-a1cc5f12-96a1-4c29-ae54-70aafacd3097'.
Please confirm target hostname exists: b'my_eh.servicebus.windows.net'
I don't know what the issue might be.
Might it be related to this one? https://github.com/Azure/azure-event-hubs-c/issues/50#issuecomment-501437753
You should be able to set up a proxy that the SDK uses to access Event Hubs. Here is a sample that shows how to set the HTTP_PROXY dictionary with the proxy information. Behind the scenes, when a proxy is passed in, the connection automatically goes over websockets.
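For reference, a sketch of the shape of that dictionary; the host, port, and credentials below are placeholders for your own proxy:

HTTP_PROXY = {
    'proxy_hostname': '127.0.0.1',  # placeholder: your proxy's address
    'proxy_port': 3128,             # placeholder: your proxy's port
    'username': None,               # optional proxy credentials
    'password': None,
}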
As @BrunoLucasAzure suggested, checking the ports on the proxy itself would be a good idea, because based on the error message it looks like the connection made it past the proxy but can't resolve the endpoint.
I created an application but it is not linked to my domain,
e.g. "site.com", "www.site.com"; when I access them I get an error.
I need to make my Elastic Beanstalk application connect to my domain (jamelaumn.com); I'm the owner.
Here are my application load balancer screenshots:
Currently I have no rules on the EB load balancer.
My EC2 load balancer:
Based on the comments and your updates, I see two issues.
The SSL certificate is set up for jamelaumn.com. This will not work; it must be set up for *.jamelaumn.com or api.jamelaumn.com. So you have to create a new SSL certificate and add it to your ALB.
You have to redirect port 80 (HTTP) to 443 (HTTPS) on your load balancer. The process is described in How can I redirect HTTP requests to HTTPS using an Application Load Balancer? A scripted sketch follows.
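If you prefer to script that second fix, here is a sketch with boto3; the listener ARN is a placeholder, and the linked article shows the console equivalent:

import boto3

elbv2 = boto3.client('elbv2')

# Replace the port-80 (HTTP) listener's default action with a permanent
# redirect to HTTPS on port 443.
elbv2.modify_listener(
    ListenerArn='arn:aws:elasticloadbalancing:<region>:<account>:listener/app/<lb>/<id>/<id>',
    DefaultActions=[{
        'Type': 'redirect',
        'RedirectConfig': {
            'Protocol': 'HTTPS',
            'Port': '443',
            'StatusCode': 'HTTP_301',
        },
    }],
)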
I’m looking to connect to a Milvus database I deployed on Google Kubernetes Engine.
I am running into an error in the last line of the script. I'm running the script locally.
Here's the process I followed to set up the GKE cluster: (https://milvus.io/docs/v2.0.0/gcp.md)
Here is a similar question I'm drawing from
Any thoughts on what I'm missing?
import os
from pymilvus import connections
from kubernetes import client, config
My_Kubernetes_IP = 'XX.XXX.XX.XX'
# Authenticate with GCP credentials
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.abspath('credentials.json')
# load milvus config file and connect to GKE instance
config = client.Configuration(os.path.abspath('milvus/config.yaml'))
config.host = f'https://{My_Kubernetes_IP}:19530'
client.Configuration.set_default(config)
## connect to milvus
milvus_ip = 'xx.xxx.xx.xx'
connections.connect(host=milvus_ip, port='19530')
Error:
BaseException: <BaseException: (code=2, message=Fail connecting to server on xx.xxx.xx.xx:19530. Timeout)>
If you want to connect to Milvus in the k8s cluster by IP and port, you may need to forward your local port 19530 to the Milvus service, with a command like the following:
$ kubectl port-forward service/my-release-milvus 19530
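With that tunnel running, the script's last line would connect to the forwarded local port instead of the external IP, roughly:

from pymilvus import connections

# Connect through the kubectl port-forward tunnel opened above instead
# of going to the cluster IP directly.
connections.connect(alias='default', host='127.0.0.1', port='19530')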
Have you checked what your Milvus external IP is?
Following the documentation's instructions, you should use kubectl get services to check which external IP is allocated for Milvus.
I'm running a simple script that calls into kubernetes via the python client:
from kubernetes import client, config

# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()

v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
However, it appears unable to get the correct credentials. I can use the kubectl command-line interface, which I've noticed populates my .kube/config file with an access-token and an expiry whenever I make a command (e.g., kubectl get pods).
As long as that token has not expired, my python script runs fine. However, once that token expires it doesn't seem to be able to refresh it, instead failing and telling me to set GOOGLE_APPLICATION_CREDENTIALS. Of course, when I created a service-account with a keyfile and pointed GOOGLE_APPLICATION_CREDENTIALS to that keyfile, it gave me the following error:
RefreshError: ('invalid_scope: Empty or missing scope not allowed.', u'{\n "error" : "invalid_scope",\n "error_description" : "Empty or missing scope not allowed."\n}')
Is there something wrong with my understanding of this client? Appreciate any help with this!
I am using the 3.0.0 release of the kubernetes python library. In case it is helpful, here is my .kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CERTIFICATE_DATA>
    server: <IP_ADDRESS>
  name: <cluster_name>
contexts:
- context:
    cluster: <cluster_name>
    user: <cluster_name>
  name: <cluster_name>
users:
- name: <cluster_name>
  user:
    auth-provider:
      config:
        access-token: <SOME_ACCESS_TOKEN>
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: 2017-11-10T03:20:19Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
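One hedged workaround sketch, assuming the RefreshError comes from the service-account keyfile yielding unscoped default credentials: request the cloud-platform scope explicitly and hand the bearer token to the kubernetes client yourself. The host and CA path below are placeholders taken from the kubeconfig above:

import google.auth
import google.auth.transport.requests
from kubernetes import client

# Request explicitly scoped credentials; the 'Empty or missing scope'
# RefreshError suggests the default ones carried no scopes.
credentials, _ = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform'])
credentials.refresh(google.auth.transport.requests.Request())

# Build a client Configuration manually with the refreshed token.
configuration = client.Configuration()
configuration.host = '<IP_ADDRESS>'            # 'server' from .kube/config
configuration.ssl_ca_cert = '/path/to/ca.crt'  # decoded certificate-authority-data
configuration.api_key = {'authorization': 'Bearer ' + credentials.token}

v1 = client.CoreV1Api(client.ApiClient(configuration))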
I'm trying to get a Docker service up using the Python SDK. My service is an nginx container that's supposed to bind container port 80 to port 80 on the machine. The nginx config has a redirect clause for all HTTP traffic to HTTPS.
When launching the service, I'm using the following parameters:
params = {
    'endpoint_spec': docker.types.EndpointSpec(ports={80: 80}),
    'name': name,
    'hostname': name,
    'image': NGINX_IMAGE_NAME,
    'networks': [NETWORK_NAME],
    'mounts': ['/home/ubuntu/.ssh/certs:/certs:ro'],
    'log_driver': 'syslog',
    'log_driver_options': {'syslog-address': 'udp://localhost:514'},
    'constraints': ['node.labels.environment==test'],
}
api.services.create(**params)
The behaviour I was expecting was port 80 exposed only on nodes that have the environment label set to test. What I got instead is port 80 open on all nodes in the Docker swarm. Is this intended? Can one restrict the binding to particular nodes?
EDIT:
Apparently the Docker ingress network is responsible for my dilemma. Any hints on how to disable this particular behaviour?
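A sketch of one way to do that, assuming docker-py 3.x or later: publish the port in 'host' mode so it bypasses the ingress routing mesh, and let the placement constraint decide which nodes bind it:

import docker.types

# A port config tuple of (target_port, protocol, publish_mode): 'host'
# mode binds the port only on nodes actually running a task, bypassing
# the routing mesh that the default 'ingress' mode uses.
endpoint_spec = docker.types.EndpointSpec(ports={80: (80, 'tcp', 'host')})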