I'm running a simple script that calls into kubernetes via the python client:
from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
However, it appears unable to get the correct credentials. I can use the kubectl command-line interface, which I've noticed populates my .kube/config file with an access-token and an expiry whenever I run a command (e.g., kubectl get pods).
As long as that token has not expired, my python script runs fine. However, once that token expires it doesn't seem to be able to refresh it, instead failing and telling me to set GOOGLE_APPLICATION_CREDENTIALS. Of course, when I created a service-account with a keyfile and pointed GOOGLE_APPLICATION_CREDENTIALS to that keyfile, it gave me the following error:
RefreshError: ('invalid_scope: Empty or missing scope not allowed.', u'{\n "error" : "invalid_scope",\n "error_description" : "Empty or missing scope not allowed."\n}')
Is there something wrong with my understanding of this client? Appreciate any help with this!
I am using the 3.0.0 release of the kubernetes python library. In case it is helpful, here is my .kube/config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <CERTIFICATE_DATA>
    server: <IP_ADDRESS>
  name: <cluster_name>
contexts:
- context:
    cluster: <cluster_name>
    user: <cluster_name>
  name: <cluster_name>
users:
- name: <cluster_name>
  user:
    auth-provider:
      config:
        access-token: <SOME_ACCESS_TOKEN>
        cmd-args: config config-helper --format=json
        cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
        expiry: 2017-11-10T03:20:19Z
        expiry-key: '{.credential.token_expiry}'
        token-key: '{.credential.access_token}'
      name: gcp
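To illustrate what the auth-provider entry above is supposed to do, here is a rough sketch of refreshing the token manually: re-run the config-helper command from cmd-path/cmd-args and inject the fresh token into the client configuration. The gcloud path and JSON keys are copied from my kubeconfig, so treat them as assumptions rather than a documented refresh API.
import json
import subprocess
from kubernetes import client, config

# Run the same command the gcp auth-provider points at to get a fresh token.
helper = subprocess.check_output(
    ["/usr/lib/google-cloud-sdk/bin/gcloud", "config", "config-helper", "--format=json"]
)
access_token = json.loads(helper)["credential"]["access_token"]

configuration = client.Configuration()
config.load_kube_config(client_configuration=configuration)
# Replace the (possibly expired) token from the kubeconfig with the fresh one.
configuration.api_key = {"authorization": "Bearer " + access_token}

v1 = client.CoreV1Api(api_client=client.ApiClient(configuration))
print(len(v1.list_pod_for_all_namespaces(watch=False).items), "pods visible")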
Related
I’m looking to connect to a Milvus database I deployed on Google Kubernetes Engine.
I am running into an error in the last line of the script. I'm running the script locally.
Here's the process I followed to set up the GKE cluster: (https://milvus.io/docs/v2.0.0/gcp.md)
Here is a similar question I'm drawing from
Any thoughts on what I'm missing?
import os
from pymilvus import connections
from kubernetes import client, config
My_Kubernetes_IP = 'XX.XXX.XX.XX'
# Authenticate with GCP credentials
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = os.path.abspath('credentials.json')
# load milvus config file and connect to GKE instance
config = client.Configuration(os.path.abspath('milvus/config.yaml'))
config.host = f'https://{My_Kubernetes_IP}:19530'
client.Configuration.set_default(config)
## connect to milvus
milvus_ip = 'xx.xxx.xx.xx'
connections.connect(host=milvus_ip, port= '19530')
Error:
BaseException: <BaseException: (code=2, message=Fail connecting to server on xx.xxx.xx.xx:19530. Timeout)>
If you want to connect to Milvus in the k8s cluster by IP + port, you may need to forward your local port 19530 to the Milvus service. Use a command like the following:
$ kubectl port-forward service/my-release-milvus 19530
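With the port-forward running, the client connects to localhost instead of the external IP. A minimal sketch (the service name and port are taken from the command above; the connection alias is just the pymilvus default):
from pymilvus import connections

# Assumes `kubectl port-forward service/my-release-milvus 19530` is running
# in another terminal, so Milvus is reachable on localhost:19530.
connections.connect(alias="default", host="127.0.0.1", port="19530")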
Have you checked what your Milvus external IP is?
Following the instructions in the documentation, you should use kubectl get services to check which external IP is allocated for Milvus.
I have an EKS cluster with a Fargate profile for compute. I've configured the Pod execution role on the Fargate profile with these two managed policies:
AmazonEKSFargatePodExecutionRolePolicy
AmazonDynamoDBFullAccess
The code runs as a CronJob; it starts off by pulling a configuration from DynamoDB:
import boto3

# region, table_name and config_id are defined elsewhere in the job
dynamodb = boto3.resource('dynamodb', region_name=region)
table = dynamodb.Table(table_name)
response = table.get_item(
    Key={
        'Id': config_id
    }
)
When the code reaches this point it always raises an exception:
raise NoCredentialsError()
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I know I can pass the AWS credentials straight in when I initialise the boto3 client but I don't want to do that for security reasons.
I had originally tested the code using an EC2 instance in an auto-scaling group for compute instead of Fargate, which worked.
How do I resolve this error?
Following all 3 steps in this guide addressed the issue.
https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
The problem lay with the service account that was executing the code in my pod.
You need to attach a role to the service account itself. In my implementation I created a new service account; in theory I could have a separate service account with separate permissions per pod.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: your-custom-service-account
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::12345678910:role/CustomServiceAccountRole
    eks.amazonaws.com/sts-regional-endpoints: 'true'
and then make sure that account is the service account associated with the pod.
spec:
  serviceAccountName: your-custom-service-account
If you don't specify a service account for your pod then it defaults to the 'default' service account that is present in the cluster.
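To confirm the role is actually being picked up inside the pod, a quick sketch (assuming the annotated service account above is attached, so EKS injects the usual IRSA environment variables and boto3 assumes the role automatically):
import os
import boto3

# Injected by the EKS pod identity webhook for pods using the annotated
# service account; botocore uses them to assume the role, no keys needed.
print(os.environ.get("AWS_ROLE_ARN"))
print(os.environ.get("AWS_WEB_IDENTITY_TOKEN_FILE"))

# Should print the assumed-role ARN instead of raising NoCredentialsError.
# The region fallback below is just a placeholder for this sketch.
sts = boto3.client("sts", region_name=os.environ.get("AWS_REGION", "eu-west-1"))
print(sts.get_caller_identity()["Arn"])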
Context
I have an application which uses a service running in my kubernetes cluster.
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  ...
spec:
  ports:
  - port: 5672
  ...
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
rabbitmq ClusterIP 10.105.0.215 <none> 5672/TCP 25h
That application also has a (Python) client, which at some point needs to connect to that service (for example, using pika). Of course, the client runs outside the cluster, but on a machine with a kubectl configuration.
I would like to design the code of the "client" module as if it were inside the cluster (or similar):
import pika

host = 'rabbitmq'
port = 5672

class AMQPClient(object):
    def __init__(self):
        """Creates a connection with an AMQP broker"""
        self.parameters = pika.ConnectionParameters(host=host, port=port)
        self.connection = pika.BlockingConnection(self.parameters)
        self.channel = self.connection.channel()
Issue
When I run the code I get the following error:
$ ./client_fun some_arguments
2020-09-18 09:36:31,137 - ERROR - Address resolution failed: gaierror(-3, 'Temporary failure in name resolution')
Of course, this is because "rabbitmq" is not in my network but in the k8s cluster network.
However, since the Kubernetes API server provides a proxy interface (see manually-constructing-apiserver-proxy-urls), it should be possible to access the service using a URL similar to this:
host = 'https://xxx.xxx.xxx.xxx:6443/api/v1/namespaces/default/services/rabbitmq/proxy'
This is not working, so something else is missing.
In theory, kubectl already knows how to reach the cluster, so maybe there is an easy way for my application to access the rabbitmq service without using a NodePort.
Note the following:
The service does not necessarily use the HTTP/HTTPS protocol.
The IP of the cluster might be different, so the proxy URL cannot be hardcoded. A kubernetes python client function should be used to get the IP and port, similar to kubectl cluster-info; see manually-constructing-apiserver-proxy-urls.
Port-forwarding to the internal service might be a perfect solution, see forward-a-local-port-to-a-port-on-the-pod; a rough sketch of that approach follows below.
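The sketch below just shells out to kubectl port-forward and then connects pika to localhost; the service name, port and timing are assumptions based on the manifest above, not a tested setup.
import subprocess
import time

import pika

# Forward local port 5672 to the rabbitmq service inside the cluster.
# Assumes kubectl is configured for the right cluster and namespace.
port_forward = subprocess.Popen(
    ["kubectl", "port-forward", "service/rabbitmq", "5672:5672"]
)
time.sleep(2)  # crude wait for the tunnel to come up

try:
    parameters = pika.ConnectionParameters(host="127.0.0.1", port=5672)
    connection = pika.BlockingConnection(parameters)
    channel = connection.channel()
    print("Connected to rabbitmq through the port-forward")
    connection.close()
finally:
    port_forward.terminate()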
If you want to use the python client you need this python package: https://github.com/kubernetes-client/python
In this package you can find how to connect to k8s.
If you want to use the k8s API directly, you need a k8s token. The K8s API docs are useful; you can also see the requests kubectl makes by adding -v 9, like: kubectl get ns -v 9
I would suggest not accessing rabbitmq through the kubernetes API server as a proxy. It introduces load on the kubernetes API server, and the API server also becomes a point of failure.
I would instead expose rabbitmq using a LoadBalancer type service. If you are not in a supported cloud (AWS, Azure, etc.) environment, you could use MetalLB as a load balancer implementation.
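Once the service is of type LoadBalancer, the external address can also be looked up from Python instead of being hardcoded. A small sketch with the kubernetes client (service name and namespace are assumed from the manifest above):
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

svc = v1.read_namespaced_service(name="rabbitmq", namespace="default")
ingress = svc.status.load_balancer.ingress[0]
# Cloud providers report either an IP or a hostname here.
rabbitmq_host = ingress.ip or ingress.hostname
rabbitmq_port = svc.spec.ports[0].port
print(rabbitmq_host, rabbitmq_port)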
I am not able to access Key Vault from an Azure Container Instance deployed into a private network with a system-assigned managed identity.
My code works fine if I use a service principal to access the vault, by passing the environment variables to the container.
https://learn.microsoft.com/en-us/azure/developer/python/azure-sdk-authenticate?tabs=bash
My code:
import os
import logging

from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential

keyVaultName = 'XXXXXXX'
KVUri = "https://" + keyVaultName + ".vault.azure.net"

credential = DefaultAzureCredential()
client = SecretClient(vault_url=KVUri, credential=credential)

def secretVal(name):
    logging.debug("Retrieving the secret from vault for %s", name)
    val = client.get_secret(name)
    return val.value
Error:
2020-05-21:02:09:37,349 INFO [_universal.py:412] Request URL: 'http://169.254.169.254/metadata/identity/oauth2/token'
2020-05-21:02:09:37,349 INFO [_universal.py:413] Request method: 'GET'
2020-05-21:02:09:37,349 INFO [_universal.py:414] Request headers:
2020-05-21:02:09:37,349 INFO [_universal.py:417] 'Metadata': 'REDACTED'
2020-05-21:02:09:37,349 INFO [_universal.py:417] 'User-Agent': 'azsdk-python-identity/1.3.1 Python/3.8.3 (Linux-4.15.0-1082-azure-x86_64-with-glibc2.2.5)'
2020-05-21:02:09:37,352 DEBUG [connectionpool.py:226] Starting new HTTP connection (1): 169.254.169.254:80
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/azure/identity/_credentials/default.py", line 105, in get_token
return super(DefaultAzureCredential, self).get_token(*scopes, **kwargs)
File "/usr/local/lib/python3.8/site-packages/azure/identity/_credentials/chained.py", line 71, in get_token
raise ClientAuthenticationError(message=error_message)
azure.core.exceptions.ClientAuthenticationError: No credential in this chain provided a token.
Attempted credentials:
EnvironmentCredential: Incomplete environment configuration. See https://aka.ms/python-sdk-identity#environment-variables for expected environment variables
ImdsCredential: IMDS endpoint unavailable
The issue seems to be similar to the one below:
https://github.com/Azure/azure-sdk-for-python/issues/8557
I tried pausing my code so the metadata service would be available, using the following command line while creating the instance, but it still doesn't work:
--command-line "/bin/bash -c 'sleep 90; /usr/local/bin/python xxxx.py'"
Unfortunately, a managed identity is not supported when you create the Azure Container Instance in a virtual network. See the limitations:
You can't use a managed identity in a container group deployed to a virtual network.
ACI in a virtual network is currently a preview feature. All the limitations are shown here. So when it's in the VNet, use a service principal to authenticate; it works much like the managed identity, just configured in a different way.
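In practice that means either setting AZURE_TENANT_ID / AZURE_CLIENT_ID / AZURE_CLIENT_SECRET on the container so DefaultAzureCredential's EnvironmentCredential kicks in, or passing the service principal explicitly. A sketch of the explicit variant (the environment variable names holding the secrets are just placeholders):
import os

from azure.identity import ClientSecretCredential
from azure.keyvault.secrets import SecretClient

# Service principal credentials passed to the container as environment
# variables; the variable names below are placeholders for this sketch.
credential = ClientSecretCredential(
    tenant_id=os.environ["SP_TENANT_ID"],
    client_id=os.environ["SP_CLIENT_ID"],
    client_secret=os.environ["SP_CLIENT_SECRET"],
)

client = SecretClient(vault_url="https://XXXXXXX.vault.azure.net", credential=credential)
print(client.get_secret("my-secret-name").value)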
I want to get the logs and the describe output of my pod in Kubernetes using the Python client. In a Kubernetes cluster we can use:
kubectl logs <NAME_OF_POD>
kubectl describe pods <NAME_OF_POD>
But I want the equivalent of these commands in the Python client for Kubernetes. What should I do?
You can read the logs of a pod using the following code:
from kubernetes.client.rest import ApiException
from kubernetes import client, config
config.load_kube_config()
pod_name = "counter"
try:
    api_instance = client.CoreV1Api()
    api_response = api_instance.read_namespaced_pod_log(name=pod_name, namespace='default')
    print(api_response)
except ApiException as e:
    print('Found exception in reading the logs')
The above code works perfectly fine for getting the logs of a pod.
To get the output of kubectl describe pod, all the information is provided by the read_namespaced_pod function; it has everything you require, and you can use that information in whatever way you need. You can edit the above code and use read_namespaced_pod in place of read_namespaced_pod_log to get that info.
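For example, a minimal sketch of pulling the pod's spec and status with read_namespaced_pod, printing a few of the fields kubectl describe shows (pod name and namespace reused from the snippet above):
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="counter", namespace="default")
# A few of the fields kubectl describe prints:
print(pod.metadata.name, pod.metadata.namespace)
print(pod.status.phase, pod.status.pod_ip, pod.spec.node_name)
for condition in pod.status.conditions or []:
    print(condition.type, condition.status)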
Kubernetes exposes a REST API, which gives you every possibility to fetch logs and object descriptions from Python.
First you need to find out which authorization mechanism your cluster uses. You can check it with:
kubectl config view --raw
Then you can make the API call with that auth method. In the example below, I used basic auth to get pod logs.
import requests
from requests.auth import HTTPBasicAuth

user = 'admin'
password = 'password'
url = 'https://cluster-api-url/api/v1/namespaces/default/pods/nginx-ingress-controller-7bbcbdcf7f-dgr57/log'

# verify=False skips TLS verification; fine for a quick test against a
# cluster with a self-signed certificate, not for production.
requests.packages.urllib3.disable_warnings()
resp = requests.get(url, auth=HTTPBasicAuth(user, password), verify=False)
print(resp.text)
To get the URL easily, run the kubectl command with the "--v=8" argument. For example, to get the URL for describing the pod:
kubectl describe pod nginx-ingress-controller-7bbcbdcf7f-dgr57 --v=8
and check the upper part of your real output:
I0514 12:31:42.376972 216066 round_trippers.go:383] GET https://cluster-api-url/api/v1/namespaces/default/events?fieldSelector=involvedObject.namespace%3Ddefault%2CinvolvedObject.uid%3D1ad92455-7589-11e9-8dc1-02a3436401b6%2CinvolvedObject.name%3Dnginx-ingress-controller-7bbcbdcf7f-dgr57
I0514 12:31:42.377026 216066 round_trippers.go:390] Request Headers:
I0514 12:31:42.377057 216066 round_trippers.go:393] Accept: application/json, */*
I0514 12:31:42.377074 216066 round_trippers.go:393] Authorization: Basic YWRtaW46elRoYUJoZDBUYm1FbGpzbjRtYXZ2N1hqRWlvRkJlQmo=
I0514 12:31:42.377090 216066 round_trippers.go:393] User-Agent: kubectl/v1.12.0 (linux/amd64) kubernetes/0ed3388
Copy the URL from the GET https://<URL> part, use it as the url in your Python script, and you are good to go.
Hope it helps.
None of these answers returns the events for a namespaced pod, which kubectl describe shows by default. To get the namespaced events for a given pod, run:
from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
out = v1.list_namespaced_event(namespace, field_selector=f'involvedObject.name={pod_name}')
where namespace is the namespace of interest and pod_name is your pod of interest.
I needed this when spawning a pod and giving the user a reasonable status report for the current condition of the pod, as well as debugging the state of the pod if it fails to progress beyond "Pending".
You need to explore https://github.com/kubernetes-client/python, the official K8s Python client.
I couldn't find any docs specific to your requirement, but I think the link below is the starting point:
https://github.com/kubernetes-client/python/blob/master/kubernetes/README.md
Try running dir() on the objects to see the available methods. For example, here is the code from the README:
from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
for i in ret.items:
    print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
Do dir(v1) or dir(ret) to see the available methods, variables, etc. The list* methods may give you the details that you see in kubectl describe pod <name>.