I would like to be able to access a GKE (Kubernetes) cluster in GCP from the Python Kubernetes client.
I can't authenticate and connect to my cluster, and I can't find the reason.
Here is what I have tried so far.
from google.auth import compute_engine
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client


def test_gke(request):
    project_id = "myproject"
    zone = "myzone"
    cluster_id = "mycluster"

    credentials = compute_engine.Credentials()

    cluster_manager_client = ClusterManagerClient(credentials=credentials)
    cluster = cluster_manager_client.get_cluster(name=f'projects/{project_id}/locations/{zone}/clusters/{cluster_id}')

    configuration = client.Configuration()
    configuration.host = f"https://{cluster.endpoint}:443"
    configuration.verify_ssl = False
    configuration.api_key = {"authorization": "Bearer " + credentials.token}
    client.Configuration.set_default(configuration)

    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    pods = v1.list_pod_for_all_namespaces(watch=False)
    for i in pods.items:
        print("%s\t%s\t%s" % (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
I'd like to get this configuration working.
I have it working where the code runs off-cluster and generates the kubectl config file for itself (see the update at the end).
Original
The first solution assumes (!) you have the cluster configured in your local config (~/.kube/config, possibly overridden by the KUBECONFIG environment variable).
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client, config

# Uses the current-context from ~/.kube/config
config.load_kube_config()

api_instance = client.CoreV1Api()

resp = api_instance.list_pod_for_all_namespaces()
for i in resp.items:
    print(f"{i.status.pod_ip}\t{i.metadata.namespace}\t{i.metadata.name}")
NOTE
Assumes you've run gcloud container clusters get-credentials to populate the ~/.kube/config file for the current cluster (and that a current-context is set).
Uses your user credentials in the ~/.kube/config file so no additional credentials are needed.
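If the cluster isn't your current context, load_kube_config also accepts an explicit context (and, optionally, a config_file path). A minimal sketch; the context name below is a placeholder:

from kubernetes import client, config

# Select a specific context instead of relying on current-context.
# Without arguments, ~/.kube/config (or KUBECONFIG) and current-context are used.
config.load_kube_config(context="gke_myproject_myzone_mycluster")  # placeholder context name

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.status.pod_ip}\t{pod.metadata.namespace}\t{pod.metadata.name}")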
Update
Okay, I have it working. Here's the code that generates a kubectl config and connects to the cluster. This code uses Application Default Credentials to provide a Service Account key to the code (usually via export GOOGLE_APPLICATION_CREDENTIALS=/path/to/key.json).
import os
import google.auth
import base64
from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client, config
from ruamel import yaml

PROJECT = os.getenv("PROJECT")
ZONE = os.getenv("ZONE")
CLUSTER = os.getenv("CLUSTER")

# Get Application Default Credentials
# `project_id` is the Service Account's
# This may differ from the cluster's `PROJECT`
credentials, project_id = google.auth.default()

# Get the cluster config from GCP
cluster_manager_client = ClusterManagerClient(credentials=credentials)
name = f"projects/{PROJECT}/locations/{ZONE}/clusters/{CLUSTER}"
cluster = cluster_manager_client.get_cluster(name=name)

SERVER = cluster.endpoint
CERT = cluster.master_auth.cluster_ca_certificate

configuration = client.Configuration()

# Creates a `kubectl` config
NAME = "freddie"  # arbitrary
CONFIG = f"""
apiVersion: v1
kind: Config
clusters:
- name: {NAME}
  cluster:
    certificate-authority-data: {CERT}
    server: https://{SERVER}
contexts:
- name: {NAME}
  context:
    cluster: {NAME}
    user: {NAME}
current-context: {NAME}
users:
- name: {NAME}
  user:
    auth-provider:
      name: gcp
      config:
        scopes: https://www.googleapis.com/auth/cloud-platform
"""

# The Python SDK doesn't directly support providing a dict
# See: https://github.com/kubernetes-client/python/issues/870
kubeconfig = yaml.safe_load(CONFIG)
loader = config.kube_config.KubeConfigLoader(kubeconfig)
loader.load_and_set(configuration)

api_client = client.ApiClient(configuration)
api_instance = client.CoreV1Api(api_client)

# Enumerate e.g. Pods
resp = api_instance.list_pod_for_all_namespaces()
for i in resp.items:
    print(f"{i.status.pod_ip}\t{i.metadata.namespace}\t{i.metadata.name}")
Related
I am creating a k8s deployment, service, and ingress using the k8s Python API. The deployment uses the minimal-notebook container to create a Jupyter notebook instance.
After creating the deployment, how can I read the token for my minimal-notebook pod using the k8s Python API?
You would need to get the pod logs, and extract the token.
Given that the pod is already running
k get pods
NAME READY STATUS RESTARTS AGE
mininote 1/1 Running 0 17m
k get pod mininote -o json | jq '.spec.containers[].image'
"jupyter/minimal-notebook"
you could do this:
[my pod's name is mininote and it is running in the default namespace]
import re
from kubernetes.client.rest import ApiException
from kubernetes import client, config

config.load_kube_config()

pod_name = "mininote"
namespace_name = "default"

try:
    api = client.CoreV1Api()
    response = api.read_namespaced_pod_log(name=pod_name, namespace=namespace_name)
    match = re.search(r'token=([0-9a-z]*)', response)
    print(match.group(1))
except ApiException as e:
    print('Found exception in reading the logs:')
    print(e)
running:
> python main.py
174c891b5db325b2aec283df90525c68ab02b02e3a565da5
I created a Lambda function using Serverless in private subnets of a non-default VPC. I want to restart the app server of an Elastic Beanstalk application at a scheduled time. I used boto3; here is the reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html
The problem is that when I run the function locally, it runs and restarts the application server. But when I deploy it using sls deploy, it does not work, and I get a null response back when I test it from the Lambda console.
Here is the code:
import json
from logging import log
from loguru import logger
import boto3
from datetime import datetime
import pytz


def main(event, context):
    try:
        client = boto3.client("elasticbeanstalk", region_name="us-west-1")
        applications = client.describe_environments()
        current_hour = datetime.now(pytz.timezone("US/Eastern")).hour
        for env in applications["Environments"]:
            applicationname = env["EnvironmentName"]
            if applicationname == "xxxxx-xxx":
                response = client.restart_app_server(
                    EnvironmentName=applicationname,
                )
                logger.info(response)
                print("restarted the application")
        return {"statusCode": 200, "body": json.dumps("restarted the instance")}
    except Exception as e:
        logger.exception(e)


if __name__ == "__main__":
    main("", "")
Here is the serverless.yml file:
service: beanstalk-starter
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  profile: xxxx-admin
  region: us-west-1
  memorySize: 512
  timeout: 15
  vpc:
    securityGroupIds:
      - sg-xxxxxxxxxxx # open on all ports for inbound
    subnetIds:
      - subnet-xxxxxxxxxxxxxxxx # private
      - subnet-xxxxxxxxxxxxxxxx # private

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux

functions:
  main:
    handler: handler.main
    events:
      - schedule: rate(1 minute)
Response from lambda console:
The area below shows the result returned by your function execution. Learn more about returning results from your function.
null
Any help would be appreciated! Let me know what I'm missing here!
To solve this, I had to attach these two policies to my AWS Lambda role from the AWS Management Console. You can also set the permissions in the serverless.yml file.
AWSLambdaVPCAccessExecutionRole
AWSCodePipeline_FullAccess
(Make sure you grant only the least privileges needed when assigning permissions to a role.)
Thank you.
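As an alternative to clicking through the console, the same managed policies can be attached to the function's execution role with boto3's IAM API. A hedged sketch; the role name below is a placeholder for whatever role Serverless created for the function:

import boto3

iam = boto3.client("iam")

# Managed policies named in the answer above
policy_arns = [
    "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole",
    "arn:aws:iam::aws:policy/AWSCodePipeline_FullAccess",
]

for arn in policy_arns:
    iam.attach_role_policy(
        RoleName="beanstalk-starter-dev-us-west-1-lambdaRole",  # placeholder role name
        PolicyArn=arn,
    )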
I have a Python app that needs to connect to a MongoDB StatefulSet in Kubernetes.
I made a Helm chart for my app, with MongoDB as a dependency.
When data needs to be written to the DB, it sends this error:
pymongo.errors.OperationFailure: Authentication failed., full error: {'operationTime': Timestamp(1629805756, 1), 'ok': 0.0, 'errmsg': 'Authentication failed.', 'code': 18, 'codeName': 'AuthenticationFailed', '$clusterTime': {'clusterTime': Timestamp(1629805756, 1), 'signature': {'hash': b'\xde\x1e\t\x93:H\xd1\x9b\x9b\x15\xacu#S\xe9\x02\xd6#`\xae', 'keyId': 6999961677823213573}}}
The Python config for Mongo:
from flask import Flask,request,render_template,jsonify
from pymongo import MongoClient
import os
from dotenv import load_dotenv, find_dotenv
load_dotenv(find_dotenv())
MONGODB_URI = os.getenv('MONGODB_URI')
app = Flask(__name__)
client = MongoClient(MONGODB_URI)
db = client.calocalcdb
collection = db.calocalc_collection
users = db.users
The Mongo URI is in a Secret, and the application should get it via an environment variable.
The program fails on this line:
user_id = users.insert_one(user)
When I try to echo $MONGODB_URI from the application pod, it returns nothing.
In your deployment YAML you need something to map the secret to the environment variable; something like:
spec:
  template:
    spec:
      containers:
        - env:
            - name: MONGODB_URI
              valueFrom:
                secretKeyRef:
                  name: <secret name>
                  key: <key within the secret>
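On the application side, it can also help to fail fast with a clear message when the variable isn't mapped, so the problem surfaces at startup rather than as an authentication error on the first insert. A minimal sketch, assuming the MONGODB_URI variable name from the question; everything else is illustrative:

import os
import sys
from pymongo import MongoClient

# The Deployment is expected to inject this via `secretKeyRef`.
MONGODB_URI = os.getenv("MONGODB_URI")
if not MONGODB_URI:
    sys.exit("MONGODB_URI is not set - check the env/secretKeyRef mapping in the Deployment")

# Force a round-trip to the server so connection/authentication problems
# show up immediately instead of on the first write.
client = MongoClient(MONGODB_URI, serverSelectionTimeoutMS=5000)
client.admin.command("ping")
print("Connected and authenticated to MongoDB")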
credentials = ServicePrincipalCredentials(client_id=client_id, secret=secret, tenant=tenant)
adf_client = DataFactoryManagementClient(credentials, subscription_id)
run_response = adf_client.pipelines.create_run(rg_name, df_name, pipeline_nm, {})

# Monitor the pipeline run
pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run_response.run_id)
while pipeline_run.status == 'InProgress' or pipeline_run.status == 'Queued':
    # print("[INFO]:Pipeline run status: {}".format(pipeline_run.status))
    time.sleep(statuschecktime)
    pipeline_run = adf_client.pipeline_runs.get(rg_name, df_name, run_response.run_id)

print("[INFO]:Pipeline run status: {}".format(pipeline_run.status))
print('')

activity_runs_paged = list(adf_client.activity_runs.list_by_pipeline_run(rg_name, df_name, pipeline_run.run_id, datetime.now() - timedelta(1), datetime.now() + timedelta(1)))
An activity run is different from a pipeline run. If you want to fetch the pipeline run details, follow the steps below.
1. Register an application with Azure AD and create a service principal.
2. Get the tenant and app ID values for signing in, then create a new application secret and save it.
3. Navigate to the Data Factory in the portal -> Access control (IAM) -> Add role assignment -> assign your application a role, e.g. Contributor.
4. Install the packages:
pip install azure-mgmt-resource
pip install azure-mgmt-datafactory
5. Then use the code below to query pipeline runs in the factory based on input filter conditions.
from azure.common.credentials import ServicePrincipalCredentials
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import *
from datetime import datetime, timedelta
subscription_id = "<subscription-id>"
rg_name = "<resource-group-name>"
df_name = "<datafactory-name>"
tenant_id = "<tenant-id>"
client_id = "<application-id (i.e client id)>"
client_secret = "<client-secret>"
credentials = ServicePrincipalCredentials(client_id=client_id, secret=client_secret, tenant=tenant_id)
adf_client = DataFactoryManagementClient(credentials, subscription_id)
filter_params = RunFilterParameters(last_updated_after=datetime.now() - timedelta(1), last_updated_before=datetime.now() + timedelta(1))
pipeline_runs = adf_client.pipeline_runs.query_by_factory(resource_group_name=rg_name, factory_name=df_name, filter_parameters = filter_params)
for pipeline_run in pipeline_runs.value:
    print(pipeline_run)
You can also get the specific pipeline run with the Run ID.
specific_pipeline_run = adf_client.pipeline_runs.get(resource_group_name=rg_name,factory_name=df_name,run_id= "xxxxxxxx")
print(specific_pipeline_run)
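In the same way, the activity runs that belong to a specific pipeline run can be queried through the client. A sketch based on the same azure-mgmt-datafactory API used above; the run ID is a placeholder:

# Query the activity runs for one pipeline run, reusing a RunFilterParameters
# time window like the pipeline-run query above.
activity_filter = RunFilterParameters(last_updated_after=datetime.now() - timedelta(1), last_updated_before=datetime.now() + timedelta(1))
activity_runs = adf_client.activity_runs.query_by_pipeline_run(resource_group_name=rg_name, factory_name=df_name, run_id="xxxxxxxx", filter_parameters=activity_filter)
for activity_run in activity_runs.value:
    print(activity_run)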
Is there a simple way to create a configuration object for the Python Kubernetes client by passing a variable containing the YAML of the kubeconfig?
It's fairly easy to do something like:
from kubernetes import client, config, watch

def main():
    config.load_kube_config()
or
from kubernetes import client, config, watch

def main():
    config.load_incluster_config()
But I would like to create the config based on a variable containing the YAML kubeconfig. Let's say I have:
k8s_config = yaml.safe_load('''
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://asdf.asdf:443
  name: cluster
contexts:
- context:
    cluster: cluster
    user: admin
  name: admin
current-context: admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tVGYUZiL2sxZlRFTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgU0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
''')
And I would like to load it as:
config.KubeConfigLoader(k8s_config)
The reason for this is that I can't store the content of the kubeconfig before loading the config.
The error I'm receiving is: "Error: module 'kubernetes.config' has no attribute 'KubeConfigLoader'"
They don't include KubeConfigLoader in the "pull up" inside config/__init__.py, which is why your kubernetes.config.KubeConfigLoader reference isn't working. You will have to reach into the implementation package and reference the class specifically:
from kubernetes.config.kube_config import KubeConfigLoader

k8s_config = yaml.safe_load('''...''')

config = KubeConfigLoader(
    config_dict=k8s_config,
    config_base_path=None)
Be aware that unlike most of my answers, I didn't actually run this one, but that's the theory
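As a follow-up, the loader isn't itself a client configuration; it has to be applied to one, the same way the GKE answer earlier does with load_and_set. A sketch under the same caveat that I haven't run it, reusing the k8s_config dict from the question:

from kubernetes import client
from kubernetes.config.kube_config import KubeConfigLoader

# `k8s_config` is the dict parsed from the kubeconfig YAML in the question.
configuration = client.Configuration()
loader = KubeConfigLoader(config_dict=k8s_config, config_base_path=None)
loader.load_and_set(configuration)

# Use the resulting Configuration for a dedicated ApiClient rather than the
# process-wide default.
api_client = client.ApiClient(configuration)
v1 = client.CoreV1Api(api_client)
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}\t{pod.metadata.name}")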