In the Python Kubernetes client, I use the code below.
YAML file:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: test-snapshot
  namespace: test
spec:
  volumeSnapshotClassName: snapshotclass
  source:
    persistentVolumeClaimName: test-pvc
Python code:
res = utils.create_from_dict(k8s_client, yaml_file)
However, I got this message:
AttributeError: module 'kubernetes.client' has no attribute 'SnapshotStorageV1Api'
I want to take a VolumeSnapshot in Kubernetes.
How can I do that?
Please give me some advice!
As I pointed out in part of the comment I made under the question:
Have you seen this github issue comment: github.com/kubernetes-client/python/issues/…?
The link posted in the comments is a GitHub issue for:
VolumeSnapshot,
VolumeSnapshotClass,
VolumeSnapshotContent
support in the Kubernetes Python client.
Citing the comment made in this GitHub issue by user @roycaihw, which explains how you can make a snapshot:
It looks like those APIs are CRDs: https://kubernetes.io/docs/concepts/storage/volume-snapshot-classes/#the-volumesnapshotclass-resource
If that's the case, you could use the CustomObject API to send requests to those APIs once they are installed. Example: https://github.com/kubernetes-client/python/blob/master/examples/custom_object.py
-- Github.com: Kubernetes client: Python: Issues: 1995: Issue comment: 2
Example
An example of Python code that creates a VolumeSnapshot is the following:
from kubernetes import client, config


def main():
    config.load_kube_config()

    api = client.CustomObjectsApi()

    # it's my custom resource defined as Dict
    my_resource = {
        "apiVersion": "snapshot.storage.k8s.io/v1beta1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": "python-snapshot"},
        "spec": {
            "volumeSnapshotClassName": "example-snapshot-class",
            "source": {"persistentVolumeClaimName": "example-pvc"}
        }
    }

    # create the resource
    api.create_namespaced_custom_object(
        group="snapshot.storage.k8s.io",
        version="v1beta1",
        namespace="default",
        plural="volumesnapshots",
        body=my_resource,
    )


if __name__ == "__main__":
    main()
Please change the values inside of this code to match your particular setup (e.g. apiVersion, .metadata.name, .metadata.namespace, etc.).
A side note!
This Python code was tested with GKE and its gce-pd-csi-driver.
After running this code the VolumeSnapshot should be created:
$ kubectl get volumesnapshots
NAME AGE
python-snapshot 19m
$ kubectl get volumesnapshotcontents
NAME AGE
snapcontent-71380980-6d91-45dc-ab13-4b9f42f7e7f2 19m
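To check the snapshot from Python as well, here is a small sketch (reusing the group/version/namespace/name from the example above) that reads it back with the same CustomObjectsApi:
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Read back the VolumeSnapshot created above and print whether it is ready to use.
snapshot = api.get_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1beta1",
    namespace="default",
    plural="volumesnapshots",
    name="python-snapshot",
)
print(snapshot.get("status", {}).get("readyToUse"))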
Additional resources:
Github.com: Kubernetes client: Python
Kubernetes.io: Docs: Concepts: Storage: Volume snapshots
I'm writing an Azure Durable Function, and I would like to write some unit tests for this whole Azure Function.
I tried to trigger the Client function (the "Start" function, as it is often called), but I can't make it work.
I'm doing this for two reasons:
It's frustrating to run the Azure Function code with "func host start" (or by pressing F5), then go to my browser, find the right tab, open http://localhost:7071/api/orchestrators/FooOrchestrator, and then go back to VS Code to debug my code.
I'd like to write some unit tests to ensure the quality of my project's code. Therefore I'm open to suggestions, maybe it would be easier to only test the execution of Activity functions.
Client Function code
This is the code of my Client function, mostly boilerplate code like this one:
import logging

import azure.functions as func
import azure.durable_functions as df


async def main(req: func.HttpRequest, starter: str) -> func.HttpResponse:
    # 'starter' seems to contain the JSON data about
    # the URLs to monitor, stop, etc., the Durable Function
    client = df.DurableOrchestrationClient(starter)

    # The Client function knows which orchestrator to call
    # according to 'function_name'
    function_name = req.route_params["functionName"]

    # This part fails with a ClientConnectorError
    # with the message: "Cannot connect to host 127.0.0.1:17071 ssl:default"
    instance_id = await client.start_new(function_name, None, None)

    logging.info(f"Orchestration '{function_name}' started with ID = '{instance_id}'.")

    return client.create_check_status_response(req, instance_id)
Unit test try
Then I tried to write some code to trigger this Client function like I did for some "classic" Azure Functions:
import asyncio
import json

import azure.functions as func

# 'main' is the Client function shown above; adjust the module path to your project layout
from client_function import main

if __name__ == "__main__":
    # Build a simple request to trigger the Client function
    req = func.HttpRequest(
        method="GET",
        body=None,
        url="don't care?",
        # What orchestrator do you want to trigger?
        route_params={"functionName": "FooOrchestrator"},
    )

    # I copy pasted the data that I obtained when I ran the Durable Function
    # with "func host start"
    starter = {
        "taskHubName": "TestHubName",
        "creationUrls": {
            "createNewInstancePostUri": "http://localhost:7071/runtime/webhooks/durabletask/orchestrators/{functionName}[/{instanceId}]?code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
            "createAndWaitOnNewInstancePostUri": "http://localhost:7071/runtime/webhooks/durabletask/orchestrators/{functionName}[/{instanceId}]?timeout={timeoutInSeconds}&pollingInterval={intervalInSeconds}&code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
        },
        "managementUrls": {
            "id": "INSTANCEID",
            "statusQueryGetUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/INSTANCEID?taskHub=TestHubName&connection=Storage&code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
            "sendEventPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/INSTANCEID/raiseEvent/{eventName}?taskHub=TestHubName&connection=Storage&code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
            "terminatePostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/INSTANCEID/terminate?reason={text}&taskHub=TestHubName&connection=Storage&code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
            "rewindPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/INSTANCEID/rewind?reason={text}&taskHub=TestHubName&connection=Storage&code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
            "purgeHistoryDeleteUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/INSTANCEID?taskHub=TestHubName&connection=Storage&code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
            "restartPostUri": "http://localhost:7071/runtime/webhooks/durabletask/instances/INSTANCEID/restart?taskHub=TestHubName&connection=Storage&code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
        },
        "baseUrl": "http://localhost:7071/runtime/webhooks/durabletask",
        "requiredQueryStringParameters": "code=aakw1DfReOkYCTFMdKPaA1Q6bSfnHZ/0lzvKsS6MVXCJdp4zhHKDJA==",
        "rpcBaseUrl": "http://127.0.0.1:17071/durabletask/",
    }

    # I need to use async methods because the "main" of the Client
    # uses async.
    response = asyncio.get_event_loop().run_until_complete(
        main(req, starter=json.dumps(starter))
    )
But unfortunately the Client function still fails at the await client.start_new(function_name, None, None) part.
How could I write some unit tests for my Durable Azure Function in Python?
Technical information
Python version: 3.9
Azure Functions Core Tools version 4.0.3971
Function Runtime Version: 4.0.1.16815
Not sure if this will help, but this is the official documentation from Microsoft on unit testing for what you are looking for: https://github.com/kemurayama/durable-functions-for-python-unittest-sample
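For reference, here is a minimal sketch of one common approach (not taken from the linked sample): mock DurableOrchestrationClient so the test never has to reach the functions host's RPC endpoint at 127.0.0.1:17071. The module name client_function and the instance id are assumptions; adjust them to your project.
import json
import unittest
from unittest import mock

import azure.functions as func

# Hypothetical module holding the Client function's main() shown in the question.
from client_function import main


class TestClientFunction(unittest.IsolatedAsyncioTestCase):
    async def test_main_returns_check_status_response(self):
        req = func.HttpRequest(
            method="GET",
            body=None,
            url="/api/orchestrators/FooOrchestrator",
            route_params={"functionName": "FooOrchestrator"},
        )

        # Patch the client class inside the module under test so no RPC
        # call to 127.0.0.1:17071 is attempted.
        with mock.patch("client_function.df.DurableOrchestrationClient") as client_cls:
            client = client_cls.return_value
            client.start_new = mock.AsyncMock(return_value="instance-123")
            client.create_check_status_response.return_value = func.HttpResponse(
                body=json.dumps({"id": "instance-123"}),
                status_code=202,
            )

            response = await main(req, starter=json.dumps({}))

        client.start_new.assert_awaited_once_with("FooOrchestrator", None, None)
        self.assertEqual(response.status_code, 202)


if __name__ == "__main__":
    unittest.main()
This avoids the RPC connection entirely, so it exercises only your own routing and response logic; the orchestrator and activity functions can then be tested separately as plain Python functions.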
How do I push my app (using python-flask + redis) to gcr.io and deploy it to Google Kubernetes Engine (with a YAML file)?
I also want to set an environment variable for my app:
import os

import redis
from flask import Flask
from flask import request, redirect, render_template, url_for
from flask import Response

app = Flask(__name__)

redis_host = os.environ['REDIS_HOST']
app.redis = redis.StrictRedis(host=redis_host, port=6379, charset="utf-8", decode_responses=True)

# Be super aggressive about saving for the development environment.
# This says save every second if there is at least 1 change. If you use
# redis in production you'll want to read up on the redis persistence
# model.
app.redis.config_set('save', '1 1')


@app.route('/', methods=['GET', 'POST'])
def main_page():
    if request.method == 'POST':
        app.redis.lpush('entries', request.form['entry'])
        return redirect(url_for('main_page'))
    else:
        entries = app.redis.lrange('entries', 0, -1)
        return render_template('main.html', entries=entries)


# Route my app by POST and redirect to the main page
@app.route('/clear', methods=['POST'])
def clear_entries():
    app.redis.ltrim('entries', 1, 0)
    return redirect(url_for('main_page'))


# use for docker on localhost
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
Posting this answer as a community wiki to set more of a baseline approach to the question rather than to give a specific solution addressing the code included in the question.
Feel free to edit/expand.
This topic could be quite broad, considering that it can be addressed in many different ways (as described in the question, by using Cloud Build, etc.).
Addressing this question specifically on the part of:
Building the image and sending it to GCR.
Using newly built image in GKE.
Building the image and sending it to GCR.
Assuming that your code and your whole Docker image run correctly, you can build/tag the image in the following manner and then send it to GCR:
gcloud auth configure-docker
adds the Docker credHelper entry to Docker's configuration file, or creates the file if it doesn't exist. This will register gcloud as the credential helper for all Google-supported Docker registries.
docker tag YOUR_IMAGE gcr.io/PROJECT_ID/IMAGE_NAME
docker push gcr.io/PROJECT_ID/IMAGE_NAME
After that you can go to the:
GCP Cloud Console (Web UI) -> Container Registry
and see the image you've uploaded.
Using newly built image in GKE
To run the earlier mentioned image you can either:
Create the Deployment in the Cloud Console (Kubernetes Engine -> Workloads -> Deploy)
A side note!
You can also add the environment variables of your choosing there (as pointed out in the question).
Create it with a YAML manifest similar to the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazing-app
  labels:
    app: amazing-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: amazing-app
  template:
    metadata:
      labels:
        app: amazing-app
    spec:
      containers:
      - name: amazing-app
        image: gcr.io/PROJECT-ID/IMAGE-NAME # <-- IMPORTANT!
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
Please take a specific look at the following part:
env:
- name: DEMO_GREETING
  value: "Hello from the environment"
This part will create an environment variable inside of each container:
$ kubectl exec -it amazing-app-6db8d7478b-4gtxk -- /bin/bash -c 'echo $DEMO_GREETING'
Hello from the environment
Additional resources:
Cloud.google.com: Container registry: Docs: Pushing and pulling
Cloud.google.com: Build: Docs: Deploying builds: Deploy GKE
I am designing a web application where users can have trade bots running. They will sign in, pay for membership, then create a bot, enter the credentials, and start the bot. The user can stop/start the trade bot.
I am trying to do this using Kubernetes, so I will have everything running on Kubernetes. I will create a namespace named bots, and all bots for all clients will run inside this bots namespace.
Stack: Python (Django framework) + MySQL + AWS + Kubernetes
Question: Is there a way to programmatically create a pod using Python? I want to integrate it with the application code, so when a user clicks on "create new bot" it starts a new pod running with all the parameters for that specific user.
Basically each pod will be a tenant, but a tenant can have multiple pods/bots.
So how do I do that? Is there any Kubernetes Python lib that does it? I did some online searching but didn't find anything.
Thanks
As noted by Harsh Manvar, you can use the official Kubernetes Python client. Here is a short function that allows you to do it.
import os
import time

from kubernetes import client, config, utils
from kubernetes.client import Configuration
from kubernetes.client.api import core_v1_api
from kubernetes.client.rest import ApiException

# Placeholders -- set these to match your registry image and bot configuration
repository = 'your-registry/your-bot-image'
tag = 'latest'
env_value = 'some-value'

config.load_incluster_config()
try:
    c = Configuration().get_default_copy()
except AttributeError:
    c = Configuration()
    c.assert_hostname = False
Configuration.set_default(c)
core_v1 = core_v1_api.CoreV1Api()


def open_pod(cmd: list,
             pod_name: str,
             namespace: str = 'bots',
             image: str = f'{repository}:{tag}',
             restartPolicy: str = 'Never',
             serviceAccountName: str = 'bots-service-account'):
    '''
    This method launches a pod in the Kubernetes cluster according to the command.
    '''
    api_response = None
    try:
        api_response = core_v1.read_namespaced_pod(name=pod_name,
                                                   namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            print("Unknown error: %s" % e)
            exit(1)

    if not api_response:
        print(f'From {os.path.basename(__file__)}: Pod {pod_name} does not exist. Creating it...')
        # Create pod manifest
        pod_manifest = {
            'apiVersion': 'v1',
            'kind': 'Pod',
            'metadata': {
                'labels': {
                    'bot': 'current-bot'  # label value identifying this bot; adjust as needed
                },
                'name': pod_name
            },
            'spec': {
                'containers': [{
                    'image': image,
                    # 'pod-running-timeout': '5m0s',  # note: this is a kubectl flag, not a valid Pod field
                    'name': 'container',
                    'args': cmd,
                    'env': [
                        {'name': 'env_variable', 'value': env_value},
                    ]
                }],
                # 'imagePullSecrets': [{'name': 'regcred'}],  # together with a service account, allows access to a private Docker registry
                'restartPolicy': restartPolicy,
                'serviceAccountName': serviceAccountName
            }
        }
        print(f'POD MANIFEST:\n{pod_manifest}')
        api_response = core_v1.create_namespaced_pod(body=pod_manifest, namespace=namespace)
        while True:
            api_response = core_v1.read_namespaced_pod(name=pod_name,
                                                       namespace=namespace)
            if api_response.status.phase != 'Pending':
                break
            time.sleep(0.01)
        print(f'From {os.path.basename(__file__)}: Pod {pod_name} in {namespace} created.')

    return pod_name
For further investigation, refer to the examples in the official GitHub repo: https://github.com/kubernetes-client/python/tree/master/examples
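Since the bots also need to be stopped, a minimal sketch of the reverse operation could look like the following (it assumes the same module-level core_v1 client as above; delete_namespaced_pod removes the bot's Pod, and the grace period is an arbitrary choice):
from kubernetes.client.rest import ApiException


def close_pod(pod_name: str, namespace: str = 'bots'):
    '''Stop a bot by deleting its Pod.'''
    try:
        core_v1.delete_namespaced_pod(
            name=pod_name,
            namespace=namespace,
            grace_period_seconds=30,
        )
        print(f'Pod {pod_name} in {namespace} is being deleted.')
    except ApiException as e:
        if e.status == 404:
            print(f'Pod {pod_name} not found in {namespace}, nothing to stop.')
        else:
            raise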
You can use the official Python Kubernetes client to create and manage Pods across the cluster programmatically.
https://github.com/kubernetes-client/python
You can keep one YAML file, replace the values in it as required (e.g. Deployment name, ports) and apply the file to the cluster; it will create the Pod with the base image.
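A rough sketch of that template-and-apply approach (the file name, field paths and values below are illustrative assumptions; utils.create_from_dict ships with the official client):
import yaml
from kubernetes import client, config, utils

config.load_kube_config()
k8s_client = client.ApiClient()

# Load a manifest template and substitute per-bot values before applying it.
with open('bot-deployment-template.yaml') as f:
    manifest = yaml.safe_load(f)

manifest['metadata']['name'] = 'bot-alice'
manifest['spec']['template']['spec']['containers'][0]['env'] = [
    {'name': 'BOT_USER', 'value': 'alice'},
]

utils.create_from_dict(k8s_client, manifest)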
Is there a simple way to create a configuration object for the Python Kubernetes client by passing a variable containing the YAML of the kubeconfig?
It's fairly easy to do something like:
from kubernetes import client, config, watch
def main():
config.load_kube_config()
or
from kubernetes import client, config, watch
def main():
config.load_incluster_config()
But I would like to create the config based on a variable containing the kubeconfig YAML. Let's say I have:
k8s_config = yaml.safe_load('''
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://asdf.asdf:443
  name: cluster
contexts:
- context:
    cluster: cluster
    user: admin
  name: admin
current-context: admin
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: LS0tVGYUZiL2sxZlRFTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgU0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
''')
And I would like to load it as:
config.KubeConfigLoader(k8s_config)
The reason for this is that I can't store the content of the kubeconfig before loading the config.
The error I'm receiving is: "Error: module 'kubernetes.config' has no attribute 'KubeConfigLoader'"
They don't include KubeConfigLoader in the "pull up" inside config/__init__.py, which is why your kubernetes.config.KubeConfigLoader reference isn't working. You will have to reach into the implementation package and reference the class specifically:
from kubernetes.config.kube_config import KubeConfigLoader

k8s_config = yaml.safe_load('''...''')

config = KubeConfigLoader(
    config_dict=k8s_config,
    config_base_path=None)
Be aware that, unlike most of my answers, I didn't actually run this one, but that's the theory.
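As a follow-up sketch (also not verified here): the loader only parses the kubeconfig, so you still have to load it into a Configuration before building a client. Newer releases of the client also expose config.load_kube_config_from_dict, which does this in one call if your installed version ships it:
from kubernetes import client
from kubernetes.config.kube_config import KubeConfigLoader

# Option 1: populate a Configuration via the loader, then build an ApiClient.
cfg = client.Configuration()
loader = KubeConfigLoader(config_dict=k8s_config, config_base_path=None)
loader.load_and_set(cfg)
v1 = client.CoreV1Api(api_client=client.ApiClient(configuration=cfg))

# Option 2 (newer client versions): load straight from the dict.
# from kubernetes import config
# config.load_kube_config_from_dict(config_dict=k8s_config)
# v1 = client.CoreV1Api()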
I have a rather simple test app:
import redis
import os
import logging

log = logging.getLogger()
log.setLevel(logging.DEBUG)


def test_redis(event, context):
    redis_endpoint = None
    if "REDIS" in os.environ:
        redis_endpoint = os.environ["REDIS"]
        log.debug("redis: " + redis_endpoint)
    else:
        log.debug("cannot read REDIS config environment variable")
        return {
            'statusCode': 500
        }

    redis_conn = None
    try:
        redis_conn = redis.StrictRedis(host=redis_endpoint, port=6379, db=0)
        redis_conn.set("foo", "boo")
        redis_conn.get("foo")
    except:
        log.debug("failed to connect to redis")
        return {
            'statusCode': 500
        }
    finally:
        del redis_conn

    return {
        'statusCode': 200
    }
which I have deployed as an HTTP endpoint with Serverless:
#
# For full config options, check the docs:
#    docs.serverless.com
#
service: XXX

plugins:
  - serverless-aws-documentation
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true

provider:
  name: aws
  stage: dev
  region: eu-central-1
  runtime: python3.6
  environment:
    # our cache
    REDIS: xx-xx-redis-001.xxx.euc1.cache.amazonaws.com

functions:
  hello:
    handler: hello/hello_world.say_hello
    events:
      - http:
          path: hello
          method: get
          # private: true # <-- Requires clients to add API keys values in the `x-api-key` header of their request
          # authorizer: # <-- An AWS API Gateway custom authorizer function
  testRedis:
    handler: test_redis/test_redis.test_redis
    events:
      - http:
          path: test-redis
          method: get
When I trigger the endpoint via API Gateway, the lambda just times out after about 7 seconds.
The environment variable is read properly; no error message is displayed.
I suppose there's a problem connecting to Redis, but the tutorials are quite explicit, so I'm not sure what the problem could be.
The problem might be the need to set up a NAT; I'm not sure how to accomplish this task with Serverless.
I ran into this issue as well. For me, there were a few problems that had to be ironed out:
The lambda needs VPC permissions.
The ElastiCache security group needs an inbound rule from the Lambda security group that allows communication on the Redis port. I thought they could just be in the same security group.
And the real kicker: I had turned on encryption in-transit. This meant that I needed to pass ssl=True to the client (e.g. redis.Redis(... ssl=True)). The redis-py page mentions that ssl_cert_reqs needs to be set to None for use with ElastiCache, but that didn't seem to be true in my case. I did however need to pass ssl=True.
It makes sense that ssl=True needed to be set, but the connection was just timing out, so I went round and round trying to figure out what the problem with the permissions/VPC/SG setup was.
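For reference, a minimal sketch of that connection setup, assuming an ElastiCache endpoint with in-transit encryption enabled and the same REDIS environment variable as in the question:
import os

import redis

# ssl=True is needed when the ElastiCache replication group has in-transit
# encryption enabled; without it the connection just hangs until the Lambda
# times out.
redis_conn = redis.Redis(
    host=os.environ["REDIS"],
    port=6379,
    db=0,
    ssl=True,
    socket_connect_timeout=5,  # fail fast instead of waiting for the Lambda timeout
)
redis_conn.set("foo", "boo")
print(redis_conn.get("foo"))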
Try having the Lambda in the same VPC and security group as your ElastiCache cluster.