Using kubectl rollout restart equivalent with k8s python client - python

I am trying to develop an AWS Lambda that performs a rollout restart of a deployment using the Python client. I cannot find any implementation in the GitHub repo or any references. Running kubectl rollout restart with -v is not giving me enough hints to continue with the development.
Anyway, the question is really about the Python client:
https://github.com/kubernetes-client/python
Any ideas? Perhaps I am missing something.

The Python client interacts directly with the Kubernetes API, much like kubectl does. However, kubectl adds some utility commands whose logic is not part of the Kubernetes API itself, and rollout is one of those utilities.
In this case that means you have two approaches. You can reverse engineer the API calls that kubectl rollout restart makes. Pro tip: with Go you can import kubectl's internal behaviour and libraries, which makes this quite easy, so consider writing your Lambda in Go.
Alternatively, you can have your Lambda call the kubectl binary (using Python's process exec libraries, e.g. subprocess). However, this does mean you need to include the binary in your Lambda in some way, either by uploading it with your Lambda or by building a Lambda layer that contains kubectl.
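For the second approach, a minimal sketch of invoking a bundled kubectl from the Lambda handler might look like this (the binary path, kubeconfig location and deployment name are assumptions, not fixed conventions):
import subprocess

def handler(event, context):
    # Assumes kubectl ships in a Lambda layer (unpacked under /opt) and that a
    # kubeconfig granting access to the cluster was written to /tmp/kubeconfig.
    subprocess.run(
        ["/opt/kubectl", "--kubeconfig", "/tmp/kubeconfig",
         "rollout", "restart", "deployment/my-deployment", "-n", "default"],
        check=True,
    )
    return {"status": "restart triggered"}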

@Andre Pires, it can be done like this:
// Bumping the restartedAt annotation on the pod template is what triggers the rolling restart.
// Note that strategy belongs inside spec in the Deployment object.
data := fmt.Sprintf(`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}},"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":"%s","maxSurge":"%s"}}}}`, time.Now().Format(time.RFC3339), "25%", "25%")
newDeployment, err := clientImpl.ClientSet.AppsV1().Deployments(item.Pod.Namespace).Patch(context.Background(), deployment.Name, types.StrategicMergePatchType, []byte(data), metav1.PatchOptions{FieldManager: "kubectl-rollout"})
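For the original question's Python client, the same annotation patch can be sent with patch_namespaced_deployment. A minimal sketch, assuming kubeconfig-based auth and a deployment named my-deployment in the default namespace:
from datetime import datetime, timezone
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
apps_v1 = client.AppsV1Api()

# kubectl rollout restart simply bumps this pod-template annotation,
# which triggers an ordinary rolling update of the deployment.
body = {
    "spec": {
        "template": {
            "metadata": {
                "annotations": {
                    "kubectl.kubernetes.io/restartedAt": datetime.now(timezone.utc).isoformat()
                }
            }
        }
    }
}

apps_v1.patch_namespaced_deployment(name="my-deployment", namespace="default", body=body)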

Related

How to code a serverless AWS lambda function that will download a linux third party application using wget and then execute commands from that app?

I would like to use a serverless Lambda to execute commands from a tool called WSO2 API CTL, just as I would on a Linux CLI. I am not sure how to mimic downloading the tool and calling its commands as if I were on a Linux machine, using either Node.js or Python in the Lambda.
I am okay with creating and setting up the Lambda, and even getting it into the right VPC so that the commands reach an application on an EC2 instance, but I am stuck on how to actually execute the Linux commands using either Node.js or Python, and which one would be better, if either.
After adding the following I get an error trying to download:
os.system("curl -O https://apim.docs.wso2.com/en/latest/assets/attachments/learn/api-controller/apictl-3.2.1-linux-x64.tar.gz")
Warning: Failed to create the file apictl-3.2.1-linux-x64.tar.gz: Read-only
It looks like there is no specific reason to download apictl during the initialisation of your Lambda (the curl warning appears because the Lambda filesystem is read-only except for /tmp). Therefore, I would propose bundling it with your deployment package.
The advantages of this approach are:
Quicker initialisation
Less code in your Lambda
You could extend your CI/CD pipeline to download the application during the build and then add it to the ZIP archive that you deploy.
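As a rough illustration, calling a binary bundled in the deployment package from the handler could look like this (the bin/apictl path and the version subcommand are placeholders for whatever your CI/CD step actually packages):
import os
import subprocess

# The deployment package is unpacked into a read-only directory; only /tmp is writable.
APICTL = os.path.join(os.path.dirname(os.path.abspath(__file__)), "bin", "apictl")

def handler(event, context):
    result = subprocess.run(
        [APICTL, "version"],      # placeholder command
        capture_output=True,
        text=True,
        check=True,
    )
    return {"stdout": result.stdout}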

Can you run python code with RAY in AWS Lambda, remotely from an IDE (eg. PyCharm)?

Keen to run a library of python code, which uses "RAY", on AWS Lambda / a serverless infrastructure.
Is this possible?
What I am after:
- Ability to run python code (with RAY library) on serverless (AWS Lambda), utilising many CPUs/GPUs
- Run the code from a local machine IDE (PyCharm)
- Have graphics (eg. Matplotlib) display on the local machine / in the local browser
One consideration is that Ray does not run on Windows.
Please let me know if this is doable (and if possible, best approach to set up).
Thank you!
AWS Lambda
AWS Lambda doesn't have GPU support and is poorly suited for distributed training of neural networks. Its maximum run time is 15 minutes, and a function doesn't have enough memory to hold a dataset (maybe only a small part of one).
You may want AWS Lambda for lightweight inference jobs after your neural network/ML model has been trained.
Because AWS Lambda autoscales, it is well suited for tasks like classifying a single image and returning the result immediately for many concurrent users.
Ray
What you should be after for parallel and distributed training are AWS EC2 instances. For deep learning, p3 instances might be a good choice because they offer Tesla V100 GPUs. For more CPU-heavy loads, c5 instances might be a good fit.
When it comes to Ray, it indeed doesn't support Windows, but it does support Docker (see the installation guide). After mounting/copying your source code into the container, you can log into a container with Ray preconfigured using this command:
docker run -t -i ray-project/deploy
and run your code from there. For Docker installation on Windows, see here. It should be doable this way. If not, use some other Docker image like ubuntu, set up everything you need (Ray and other libraries) and run from within the container (or better yet, make the container executable so it outputs to your console as you wanted).
If that doesn't work either, you can manually log into a small AWS EC2 instance, set up your environment there, and run your code there as well.
You may wish to check this friendly introduction to the settings and the Ray documentation for information on how to configure your exact use case.
import boto3, json
#pass profile to boto3
boto3.setup_default_session(profile_name='default')
lam = boto3.client('lambda', region_name='us-east-1')
payload = {
    "arg1": "val1",
    "arg2": "val2"
}
payloadJSON = json.dumps(payload)
lam.invoke(FunctionName='some_lambda', InvocationType='Event', LogType='None', Payload=payloadJSON)
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.invoke
If you have a credentials file, you can cat the ~/.aws/credentials file to find the profile (or role) to use for the session setup.
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html

Nvidia-Docker API for Python?

I am currently running a lot of similar Docker containers which are created and run by a Python script via the official API. Since Docker natively doesn't support GPU mapping, I tested Nvidia-Docker, which fulfills my requirements, but I'm not sure how to integrate it seamlessly in my script.
I tried to find the proper API calls for Nvidia-Docker using Google and the docs, but I didn't manage to find anything useful.
My current code looks something like this:
# assemble a new container using the params obtained earlier
container_id = client.create_container(img_id, command=commands, stdin_open=True, tty=True, volumes=[folder], host_config=client.create_host_config(binds=[mountpoint,]),detach=False)
# run it
client.start(container_id)
The documentation for the API can be found here.
From Nvidia-Docker's GitHub page:
The default runtime used by the Docker® Engine is runc; our runtime can become the default one by configuring the docker daemon with --default-runtime=nvidia. Doing so will remove the need to add the --runtime=nvidia argument to docker run. It is also the only way to have GPU access during docker build.
Basically, I want to add the --runtime=nvidia argument to my create_container call, but it seems there is no support for that.
Since I need to switch between runtimes multiple times during the script's execution (mixing Nvidia-Docker and native Docker containers), the quick and dirty way would be to run a bash command using subprocess, but I feel like there has to be a better way.
TL;DR: I am looking for a way to run Nvidia-Docker containers from a Python script.
The run() and create() methods have a runtime parameter, according to https://docker-py.readthedocs.io/en/stable/containers.html
This makes sense, because the docker CLI tool is fairly simple and every command translates into a call to the Docker Engine's REST API.
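For example, with the high-level docker SDK for Python, a GPU container can be started from the same script as plain ones just by passing runtime (the image tag below is only an example):
import docker

client = docker.from_env()

# Equivalent of `docker run --runtime=nvidia <image> nvidia-smi`;
# other containers in the same script can simply omit the runtime argument.
output = client.containers.run(
    "nvidia/cuda:12.2.0-base-ubuntu22.04",   # example CUDA-enabled image
    "nvidia-smi",
    runtime="nvidia",
    remove=True,
)
print(output.decode())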

Automate CloudFoundry Deployment with python

I am new to Cloud Foundry. I want to automate the application deployment and service binding in Cloud Foundry with Python.
For deploying an application in Cloud Foundry we will use the commands (Cloud Foundry CLI) like:
cf push redis-sample-app
cf create-service redis shared-vm service-example-redis
cf bind-service redis-sample-app service-example-redis
cf restage redis-sample-app
Now I don't want to use the CLI for that, I just want to write a Python/Ruby/(any language) script which will do all the things.
I have tried Google and ended up with the Python cloudfoundry module, but it's not clear how to proceed. Is there any API for my task, like boto for accessing EC2? I have tried the following code in Python:
from cloudfoundrty import CloudFoundryInterface
cf=CloudFoundryInterface(target="api.end.point",username="myusername",password="mypwd")
cf.login()
It's showing the error:
`File "C:\Python27\lib\site-packages\requests\models.py", line 398, in full_url
raise MissingSchema("Invalid URL %r: No schema supplied" % url)
MissingSchema: Invalid URL u'users/kishorekumarnetala%40gmail.com/tokens': No schema supplied`
First, a quick thing, what is the actual API endpoint of your Cloud Foundry deployment? If you're using the cf CLI, what did you put when you did cf api API_ENDPOINT? You can run cf target to see what the current API endpoint is set to. It should have a scheme like http or https. If you're actually putting api.end.point in your Python code, that's why you're getting the error message you're seeing.
As for your general question about automating Cloud Foundry interactions, you have a few options:
Write a shell script that directly drives the cf CLI
Write a module in a higher-level language like Ruby or Python that simply wraps calls to the CLI
Write a module in a higher-level language that wraps calls to the restful API.
Here's a breakdown of those options:
If your list of languages (Ruby/Python/any language) included things like bash or pure sh, then you can easily use that to have "code" that automates interacting with Cloud Foundry. The CLI is designed to be scriptable, and not require human interaction. This is the most common approach, since the CLI is designed for this use case.
If you want to drive interactions via a different language (e.g. maybe because this is part of a larger project that's already in a different language), you can certainly do that. The full suite of highest level system tests for Cloud Foundry does this in Golang. If you're familiar with navigating Golang projects, you can look at:
the package that drives the CLI
the test suites that use that package
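If you go the Python route, a thin wrapper around the CLI (the second option above) can stay very small. A minimal sketch, assuming the cf binary is installed and already targeted and logged in:
import subprocess

def cf(*args):
    # Run a cf CLI command and return its output, raising if it fails.
    result = subprocess.run(["cf", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Replicates the manual steps from the question.
cf("push", "redis-sample-app")
cf("create-service", "redis", "shared-vm", "service-example-redis")
cf("bind-service", "redis-sample-app", "service-example-redis")
cf("restage", "redis-sample-app")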
You can also build a wrapper around the RESTful HTTP API. There are also several out there already in the ecosystem:
Here is a recent thread about an official supported Java client
Someone in the community has been developing a node.js client for their own purposes (not sure if it's public though)
There used to be a Ruby gem, but I believe it is deprecated; you may still be able to find it and look at it for ideas.

Small library for remote command execution similar to Fabric

I currently have my own set of wrappers around Paramiko, a few functions that log the output as a command gets executed, reboot some server, transfer files, etc. However, I feel like I'm reinventing the wheel and this should already exist somewhere.
I've looked into Fabric and it partially provides this, but its execution model would force me to rewrite a big part of my code, especially because it shares information about the hosts in global variables and doesn't seem to have been originally intended to be used as a library.
Preferably, each server would be represented by an object, so I could keep state about it and run commands with something like server.run("uname -a"). It should provide some basic tools like rebooting, checking for connectivity and transferring files, and ideally even give me a simple way to run a command on a subset of servers in parallel.
Is there already some library that provides this?
Look at Ansible: 'minimal ssh command and control'. From their description: 'Ansible is a radically simple configuration-management, deployment, task-execution, and multinode orchestration framework'.
Fabric 2.0 (currently in development) will probably be similar to what you have in mind.
Have a look at https://github.com/fabric/fabric/tree/v2
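As a rough illustration of the per-server object the question describes, here is a minimal sketch built directly on Paramiko (the host, user and key handling are placeholders):
import paramiko

class Server:
    # Minimal per-server wrapper in the spirit of server.run("uname -a").
    def __init__(self, host, username, key_filename=None):
        self.host = host
        self.client = paramiko.SSHClient()
        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        self.client.connect(host, username=username, key_filename=key_filename)

    def run(self, command):
        # Execute a command and return (exit_status, stdout, stderr).
        stdin, stdout, stderr = self.client.exec_command(command)
        return stdout.channel.recv_exit_status(), stdout.read().decode(), stderr.read().decode()

server = Server("server1.example.com", username="deploy")   # placeholder host and user
print(server.run("uname -a")[1])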
