Python app to call Kubernetes to create a pod programmatically - python

I am designing a web application where users can have trade bots running. They will sign in, pay for a membership, then create a bot, enter the credentials, and start the bot. The user can stop / start the trade bot.
I am trying to do this using Kubernetes, so I will have everything running on Kubernetes. I will create a namespace named bots, and all bots for all clients will be running inside this bots namespace.
Stack is: Python (Django framework) + MySQL + AWS + Kubernetes
Question: Is there a way to programmatically create a pod using Python? I want to integrate it with the application code, so when a user clicks on "create new bot" it will start a new pod running with all the parameters for the specific user.
Basically each pod will be a tenant. But a tenant can have multiple pods / bots.
So how to do that? Is there any Kubernetes Python lib that does it? I did some online searching but didn't find anything.
Thanks

As noted by Harsh Manvar, you can use the official Kubernetes Python client. Here is a short function which allows you to do it.
import os
import time

from kubernetes import client, config, utils
from kubernetes.client import Configuration
from kubernetes.client.api import core_v1_api
from kubernetes.client.rest import ApiException

# Use load_incluster_config() when running inside the cluster,
# or config.load_kube_config() when running outside of it.
config.load_incluster_config()
try:
    c = Configuration().get_default_copy()
except AttributeError:
    c = Configuration()
    c.assert_hostname = False
Configuration.set_default(c)

# This snippet is taken from a class, hence the self below.
self.core_v1 = core_v1_api.CoreV1Api()
def open_pod(self, cmd: list,
             pod_name: str,
             namespace: str = 'bots',
             image: str = f'{repository}:{tag}',  # repository and tag are defined elsewhere in the original code
             restartPolicy: str = 'Never',
             serviceAccountName: str = 'bots-service-account'):
    '''
    This method launches a pod in the Kubernetes cluster according to the given command.
    '''
    api_response = None
    try:
        api_response = self.core_v1.read_namespaced_pod(name=pod_name,
                                                        namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            print("Unknown error: %s" % e)
            exit(1)

    if not api_response:
        print(f'From {os.path.basename(__file__)}: Pod {pod_name} does not exist. Creating it...')
        # Create the pod manifest. Note: the original snippet included a
        # 'pod-running-timeout' key, which is a kubectl flag rather than a
        # valid container field, so it is omitted here.
        pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'labels': {
                    'bot': pod_name  # the original referenced an undefined current-bot variable
                },
                'name': pod_name
            },
            'spec': {
                'containers': [{
                    'image': image,
                    'name': 'container',
                    'args': cmd,
                    'env': [
                        {'name': 'env_variable', 'value': env_value},  # env_value is defined elsewhere
                    ]
                }],
                # Together with a service account, imagePullSecrets allows access to a
                # private repository docker image:
                # 'imagePullSecrets': [{'name': 'regcred'}],
                'restartPolicy': restartPolicy,
                'serviceAccountName': serviceAccountName
            }
        }
        print(f'POD MANIFEST:\n{pod_manifest}')
        api_response = self.core_v1.create_namespaced_pod(body=pod_manifest,
                                                          namespace=namespace)
        # Wait until the pod leaves the Pending phase
        while True:
            api_response = self.core_v1.read_namespaced_pod(name=pod_name,
                                                            namespace=namespace)
            if api_response.status.phase != 'Pending':
                break
            time.sleep(0.01)
        print(f'From {os.path.basename(__file__)}: Pod {pod_name} in {namespace} created.')
    return pod_name
For further investigation, refer to the examples in the official github repo: https://github.com/kubernetes-client/python/tree/master/examples
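Since the question also mentions that users can stop their bots, a matching teardown method could reuse the same CoreV1Api client. This is a hedged sketch, not part of the original answer; the grace period value is an assumption:

def close_pod(self, pod_name: str, namespace: str = 'bots'):
    '''Delete the bot pod; treat "not found" as already stopped.'''
    try:
        self.core_v1.delete_namespaced_pod(
            name=pod_name,
            namespace=namespace,
            grace_period_seconds=30,  # assumed value, tune as needed
        )
    except ApiException as e:
        if e.status != 404:
            raise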

You can use the official Python Kubernetes client to create and manage pods across the cluster programmatically:
https://github.com/kubernetes-client/python
You can keep one YAML file as a template, substitute the values as required (deployment name, ports, and so on), and apply the file to the cluster; it will create the pod from the base image.
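For illustration, a minimal sketch of that template-and-apply flow. The file name pod.yaml and the placeholder values are assumptions, not from the original answer:

import yaml
from kubernetes import client, config, utils

config.load_kube_config()
k8s_client = client.ApiClient()

# pod.yaml contains {name} and {image} placeholders to be filled per request
with open("pod.yaml") as f:
    manifest = yaml.safe_load(f.read().format(name="bot-42", image="repo/bot:latest"))

utils.create_from_dict(k8s_client, manifest)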

Related

How can I read a Jupyter notebook token from the k8s Python API?

I am creating a k8s deployment, service, and ingress using the k8s Python API. The deployment uses the minimal-notebook container to create a Jupyter notebook instance.
After creating the deployment, how can I read the token for my minimal-notebook pod using the k8s Python API?
You would need to get the pod logs, and extract the token.
Given that the pod is already running
k get pods
NAME       READY   STATUS    RESTARTS   AGE
mininote   1/1     Running   0          17m

k get pod mininote -o json | jq '.spec.containers[].image'
"jupyter/minimal-notebook"
you could do this:
[my pod's name is mininote and it is running in the default namespace]
import re
from kubernetes.client.rest import ApiException
from kubernetes import client, config

config.load_kube_config()

pod_name = "mininote"
namespace_name = "default"
try:
    api = client.CoreV1Api()
    response = api.read_namespaced_pod_log(name=pod_name, namespace=namespace_name)
    match = re.search(r'token=([0-9a-z]*)', response)
    print(match.group(1))
except ApiException as e:
    print('Found exception in reading the logs:')
    print(e)
running:
> python main.py
174c891b5db325b2aec283df90525c68ab02b02e3a565da5
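One caveat: right after the deployment is created, the notebook may not have logged its token yet. A hedged variant that polls the same log endpoint until the token shows up (the timeout and interval values are arbitrary choices):

import re
import time
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

def wait_for_token(pod_name: str, namespace: str = "default", timeout: float = 60.0) -> str:
    # poll the pod logs until the Jupyter token appears or we give up
    deadline = time.time() + timeout
    while time.time() < deadline:
        logs = api.read_namespaced_pod_log(name=pod_name, namespace=namespace)
        match = re.search(r'token=([0-9a-z]*)', logs)
        if match:
            return match.group(1)
        time.sleep(2)
    raise TimeoutError(f'no token found in logs of {pod_name}')

print(wait_for_token("mininote"))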

Can I take a volume snapshot with the k8s python client?

In the Python k8s client, I use the code below.
yaml file
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
  namespace: test
spec:
  volumeSnapshotClassName: snapshotclass
  source:
    persistentVolumeClaimName: test-pvc
python code
res = utils.create_from_dict(k8s_client, yaml_file)
However, I got this message:
AttributeError: module 'kubernetes.client' has no attribute 'SnapshotStorageV1Api'
I want to take a VolumeSnapshot in k8s.
How can I do that?
Please give me some advice!
As I pointed out in part of the comment I made under the question:
Have you seen this github issue comment: github.com/kubernetes-client/python/issues/…?
The link posted in the comment is a GitHub issue for:
VolumeSnapshot,
VolumeSnapshotClass,
VolumeSnapshotContent
support in the Kubernetes python client.
Citing the comment made in this GitHub issue by user @roycaihw, which explains a way to make a snapshot:
It looks like those APIs are CRDs: https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/#the-volumesnapshotclass-resource
If that's the case, you could use the CustomObject API to send requests to those APIs once they are installed. Example: https://github.com/kubernetes-client/python/blob/master/examples/custom_object.py
-- Github.com: Kubernetes client: Python: Issues: 1995: Issue comment: 2
Example
An example of Python code that creates a VolumeSnapshot is the following:
from kubernetes import client, config

def main():
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # it's my custom resource defined as a Dict
    my_resource = {
        "apiVersion": "snapshot.storage.k8s.io/v1beta1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": "python-snapshot"},
        "spec": {
            "volumeSnapshotClassName": "example-snapshot-class",
            "source": {"persistentVolumeClaimName": "example-pvc"}
        }
    }

    # create the resource
    api.create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1beta1",
        namespace="default",
        plural="volumesnapshots",
        body=my_resource,
    )

if __name__ == "__main__":
    main()
Please change the values inside this code to match your particular setup (e.g. apiVersion, .metadata.name, .metadata.namespace, etc.).
A side note!
This Python code was tested with GKE and its gce-pd-csi-driver.
After running this code the VolumeSnapshot should be created:
$ kubectl get volumesnapshots
NAME              AGE
python-snapshot   19m

$ kubectl get volumesnapshotcontents
NAME                                               AGE
snapcontent-71380980-6d91-45dc-ab13-4b9f42f7e7f2   19m
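To verify from Python instead of kubectl, the snapshot can be read back with the same CustomObjects API. A hedged sketch; the status field assumes the snapshot.storage.k8s.io/v1beta1 schema used above:

from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

snapshot = api.get_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="volumesnapshots",
    name="python-snapshot",
)
# readyToUse flips to True once the storage driver finishes the snapshot
print(snapshot.get("status", {}).get("readyToUse"))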
Additional resources:
Github.com: Kubernetes client: Python
Kubernetes.io: Docs: Concepts: Storage: Volume snapshots

Startup Script in Metadata Not Running (Python, Google Compute Engine, Cloud Storage Trigger)

I have an app running on Google App Engine and an AI running on Google Compute Engine. I'm triggering the VM instance to start on a change in a Google Cloud Storage bucket, and I have a startup script that I attempt to store in the metadata of the GCE instance. My cloud function looks like this:
import os
from googleapiclient.discovery import build

def start(event, context):
    file = event
    print(file["id"])
    string = file["id"]
    new_string = string.split('/')
    user_id = new_string[1]
    payment_id = new_string[2]
    name = new_string[3]
    print(name)
    if name == "uploadcomplete.txt":
        startup_script = """ #! /bin/bash
        sudo su username
        cd directory/directory
        python analysis.py -- gs://location/{userId}/{paymentId}
        """.format(userId=user_id, paymentId=payment_id)
        # initialize compute api
        service = build('compute', 'v1', cache_discovery=False)
        print('VM Instance starting')
        project = 'zephyrd'
        zone = 'us-east1-c'
        instance = 'zephyr-a'
        # get metadata fingerprint in order to set new metadata
        metadata = service.instances().get(project=project, zone=zone, instance=instance)
        metares = metadata.execute()
        fingerprint = metares["metadata"]["fingerprint"]
        # set new metadata
        bodydata = {"fingerprint": fingerprint,
                    "items": [{"key": "startup-script", "value": startup_script}]}
        meta = service.instances().setMetadata(project=project, zone=zone, instance=instance,
                                               body=bodydata).execute()
        print(meta)
        # confirm new metadata
        instanceget = service.instances().get(project=project, zone=zone, instance=instance).execute()
        print("New Metadata:", instanceget['metadata'])
        print(instanceget)
        # start VM
        request = service.instances().start(project=project, zone=zone, instance=instance)
        response = request.execute()
        print('VM Instance started')
        print(response)
The VM starts, but the startup script does not run. The script has been simplified for the purposes of the question, but this is just a basic command I'm trying to run. I would add the script directly to the metadata in the console, but I use values from the cloud function trigger to run commands in the VM. What am I missing?
I've attempted to set the metadata in two ways:
"items": [{"key": "startup-script", "value": startup_script}]
as well as:
"items": [{"startup-script" : startup_script}]
Neither works. The commands run beautifully if I manually type them in the shell.
Look into your logs to determine why it's not executing:
https://cloud.google.com/compute/docs/startupscript#viewing_startup_script_logs
Probably the issue is that you are trying to execute a Python script directly instead of a bash script.
Your startup script should be something like:
#! /bin/bash
# ...
python3 path/to/python_script.py
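One more pitfall worth noting: in a startup script, sudo su username opens a subshell, and the lines after it do not run inside that shell; they keep running as root once the subshell exits. A hedged bash sketch of a more reliable form, reusing the question's placeholder user and paths:

#! /bin/bash
# run the whole command as the target user in one shot instead of 'sudo su'
sudo -u username bash -c 'cd /home/username/directory/directory && python3 analysis.py -- gs://location/{userId}/{paymentId}'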

not able to protect the python flask rest api service using keycloak

I have a Keycloak server running in Docker (192.168.99.100:8080) and a Python flask-oidc application running locally (localhost:5000). I am not able to access the protected REST API even after getting the access_token. Has anyone tried this code? If so, please help me with this. Thank you.
This is my Keycloak client, using the docker jboss/keycloak image.
This is my new user under the new realm.
Below is my Flask application.
app.py
import os
import logging

from flask import Flask, g
from flask_oidc import OpenIDConnect
import requests

secret_key = os.urandom(24).hex()
print(secret_key)

logging.basicConfig(level=logging.DEBUG)

app = Flask(__name__)
app.config["OIDC_CLIENT_SECRETS"] = "client_secrets.json"
app.config["OIDC_COOKIE_SECURE"] = False
app.config["OIDC_SCOPES"] = ["openid", "email", "profile"]
app.config["SECRET_KEY"] = secret_key
app.config["TESTING"] = True
app.config["DEBUG"] = True
app.config["OIDC_ID_TOKEN_COOKIE_SECURE"] = False
app.config["OIDC_REQUIRED_VERIFIED_EMAIL"] = False
app.config["OIDC_INTROSPECTION_AUTH_METHOD"] = 'client_secret_post'
app.config["OIDC_USER_INFO_ENABLED"] = True

oidc = OpenIDConnect(app)

@app.route('/')
def hello_world():
    if oidc.user_loggedin:
        return ('Hello, %s, See private '
                'Log out') % \
            oidc.user_getfield('preferred_username')
    else:
        return 'Welcome anonymous, Log in'
client_secrets.json
{
  "web": {
    "issuer": "http://192.168.99.100:8080/auth/realms/kariga",
    "auth_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/auth",
    "client_id": "flask-app",
    "client_secret": "eb11741d-3cb5-4457-8ff5-0202c6d6b250",
    "redirect_uris": [
      "http://localhost:5000/"
    ],
    "userinfo_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/userinfo",
    "token_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/token",
    "token_introspection_uri": "http://192.168.99.100:8080/auth/realms/kariga/protocol/openid-connect/token/introspect"
  }
}
When I launch the flask-app in a web browser and click on the Log in link, it prompts for the user details (the user created under my new realm). It takes a couple of seconds, then it redirects me to an error page:
http://localhost:5000/oidc_callback?state=eyJjc3JmX3Rva2VuIjogIkZZbEpqb3ZHblZoUkhEbmJsdXhEVW
that says:
httplib2.socks.HTTPError
httplib2.socks.HTTPError: (504, b'Gateway Timeout')
It is also redirecting to /oidc_callback, which is not mentioned anywhere.
Any help would be appreciated.
The problem is occurring because the Keycloak server running in Docker (192.168.99.100) is not able to reach the Flask application server running locally (localhost).
It is better to run both as services in Docker by creating a docker-compose file.
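A minimal docker-compose sketch of that setup (service names, build context, and port mappings are assumptions; the URLs in client_secrets.json would then point at http://keycloak:8080 instead of the Docker machine IP):

version: "3"
services:
  keycloak:
    image: jboss/keycloak
    ports:
      - "8080:8080"
  flask-app:
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - keycloak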

elasticache redis - python - connection times out

I have a rather simple test app:
import redis
import os
import logging

log = logging.getLogger()
log.setLevel(logging.DEBUG)

def test_redis(event, context):
    redis_endpoint = None
    if "REDIS" in os.environ:
        redis_endpoint = os.environ["REDIS"]
        log.debug("redis: " + redis_endpoint)
    else:
        log.debug("cannot read REDIS config environment variable")
        return {
            'statusCode': 500
        }

    redis_conn = None
    try:
        redis_conn = redis.StrictRedis(host=redis_endpoint, port=6379, db=0)
        redis_conn.set("foo", "boo")
        redis_conn.get("foo")
    except:
        log.debug("failed to connect to redis")
        return {
            'statusCode': 500
        }
    finally:
        del redis_conn

    return {
        'statusCode': 200
    }
which I have deployed as an HTTP endpoint with Serverless:
#
# For full config options, check the docs:
#    docs.serverless.com
#
service: XXX

plugins:
  - serverless-aws-documentation
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true

provider:
  name: aws
  stage: dev
  region: eu-central-1
  runtime: python3.6
  environment:
    # our cache
    REDIS: xx-xx-redis-001.xxx.euc1.cache.amazonaws.com

functions:
  hello:
    handler: hello/hello_world.say_hello
    events:
      - http:
          path: hello
          method: get
          # private: true # <-- Requires clients to add API keys values in the `x-api-key` header of their request
          # authorizer: # <-- An AWS API Gateway custom authorizer function
  testRedis:
    handler: test_redis/test_redis.test_redis
    events:
      - http:
          path: test-redis
          method: get
When I trigger the endpoint via API Gateway, the Lambda just times out after about 7 seconds.
The environment variable is read properly; no error message is displayed.
I suppose there's a problem connecting to Redis, but the tutorials are quite explicit, so I'm not sure what the problem could be.
The problem might be the need to set up a NAT; I'm not sure how to accomplish this task with Serverless.
I ran into this issue as well. For me, there were a few problems that had to be ironed out:
The Lambda needs VPC permissions.
The ElastiCache security group needs an inbound rule from the Lambda security group that allows communication on the Redis port. I thought they could just be in the same security group.
And the real kicker: I had turned on encryption in-transit. This meant that I needed to pass ssl=True to the client, e.g. redis.Redis(..., ssl=True) (note that redis-py has no RedisClient class). The redis-py page mentions that ssl_cert_reqs needs to be set to None for use with ElastiCache, but that didn't seem to be true in my case. I did, however, need to pass ssl=True.
It makes sense that ssl=True needed to be set, but the connection was just timing out, so I went round and round trying to figure out what the problem with the permissions/VPC/SG setup was.
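Put together, a hedged version of the connection code for a cluster with in-transit encryption enabled (the endpoint is the one from the question; the timeout value is an arbitrary choice):

import redis

redis_conn = redis.Redis(
    host="xx-xx-redis-001.xxx.euc1.cache.amazonaws.com",  # ElastiCache endpoint
    port=6379,
    db=0,
    ssl=True,             # required when in-transit encryption is enabled
    ssl_cert_reqs=None,   # needed for some ElastiCache setups, not all
    socket_timeout=5,     # fail fast instead of waiting for the Lambda timeout
)
redis_conn.set("foo", "boo")
print(redis_conn.get("foo"))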
Try having the Lambda in the same VPC and security group as your ElastiCache cluster.
