How to deploy to AWS using CDK and SageMaker? - python

I want to use this repo and I have created and activated a virtualenv and installed the required dependencies.
I get an error when I run pytest.
The file binance_cdk/app.py describes the following tasks:
App (PSVM method) entry point of the program.
Note:
Steps to set up CDK:
install npm
cdk init (creates an empty project)
Add in your infrastructure code.
Run cdk synth
cdk bootstrap <aws_account>/
Run cdk deploy ---> this creates a CloudFormation template and the AWS resources will be created as per the mentioned stack.
I'm stuck on step 3: what do I add as infrastructure code? And if I want to use this on Amazon SageMaker, which I am not familiar with, do I even bother doing this in my local terminal, or do I do the whole process on SageMaker regardless?
Thank you in advance for your time and answers !

The infrastructure code is the Python code you write for the resources you want to provision. In the example you provided, the infra code they have creates a Lambda function (see the sketch after the links below). You can do this locally on your machine; the question is what you want to achieve with SageMaker. If you want to create an endpoint, then follow the CDK Python docs for SageMaker to identify the steps for creating one. Here are two guides: the first is an introduction to the AWS CDK and getting started, the second is an example of using the CDK with SageMaker to create an endpoint for inference.
CDK Python Starter: https://towardsdatascience.com/build-your-first-aws-cdk-project-18b1fee2ed2d
CDK SageMaker Example: https://github.com/philschmid/cdk-samples/tree/master/sagemaker-serverless-huggingface-endpoint
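To make step 3 concrete, here is a minimal, hypothetical CDK v2 Python stack that provisions a Lambda function; the stack name, handler path, and asset directory are assumptions, not taken from the repo in question.

# app.py - minimal sketch of CDK "infrastructure code": one stack, one Lambda
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class MyServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        _lambda.Function(
            self, "MyHandler",
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",                 # index.py defining handler()
            code=_lambda.Code.from_asset("lambda"),  # local dir holding the code
            timeout=Duration.seconds(30),
        )

app = App()
MyServiceStack(app, "MyServiceStack")
app.synth()

Running cdk synth against an app like this writes the CloudFormation template into cdk.out, and cdk deploy creates the resources.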

Related

Remove CodePipeline Update from CloudFormation stack's deploy

I'm currently using cdk synth to generate templates that I deploy with a sam deploy command.
The idea behind my work is to deploy my stacks from a GitLab runner, without the use of CodePipeline (which is the current deployment method).
I manage to deploy my stack as I want using the template located in cdk.out after the cdk synth command. However, the deploy triggers the CodePipeline, because the template contains a "Type": "AWS::CodePipeline::Pipeline" section, which leads to a "Modify cdkpipeline AWS::CodePipeline::Pipeline" operation in the sam deploy command of the stack.
I want to remove this operation from the deploy, so that the CodePipeline is not triggered.
I can't modify the code itself, because I need to keep both deployment methods alive.
I also can't use the --guided prompts of the sam deploy command, because the scripts are executed from a GitLab runner.
Do you have any clue on achieving this, by passing parameters to the deploy (other libraries are fine with me if they work), or by using a tool that deletes the CodePipeline section from the templates (see the sketch below)? Maybe something also exists to prevent cdk synth from generating a CodePipeline section?
I'm open to any proposal.
Thank you already
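For illustration only, the "tool that deletes the CodePipeline section" mentioned above could be a small post-synth script along these lines; the template file names are assumptions, and dependent resources (the pipeline's IAM roles, artifact bucket, etc.) may need the same treatment.

# strip_pipeline.py - hypothetical post-synth filter run before sam deploy
import json

with open("cdk.out/MyStack.template.json") as f:   # assumed stack name
    template = json.load(f)

# Drop every resource of the CodePipeline type so the deploy never touches it.
template["Resources"] = {
    logical_id: resource
    for logical_id, resource in template.get("Resources", {}).items()
    if resource.get("Type") != "AWS::CodePipeline::Pipeline"
}

with open("cdk.out/MyStack.filtered.template.json", "w") as f:
    json.dump(template, f, indent=2)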

Can you call AWS CDK in python without CLI?

Is it possible to run the AWS CDK lifecycle directly in Python (or any other language), without using the CLI? e.g.
app.py
app = cdk.App()
Stack(app, ..)
app.synth()
app.deploy()
run it with python:
python app.py  # instead of cdk deploy...
this could be useful to prepare temporary test scenarios.
What's possible as of today:
Programmatically synth templates without cdk synth?
Yes. CDK's testing constructs work this way. Tests assert against a programmatically synthesized template.
from aws_cdk.assertions import Template

template = Template.from_stack(processor_stack)  # processor_stack is a Stack instance
template.resource_count_is("AWS::SNS::Subscription", 1)
Programmatically deploy apps without cdk deploy?
Not in the way the OP describes, but you can get close with the CDK's CI/CD constructs. The CDK Pipelines construct synths and deploys CDK apps programmatically: build a pipeline that builds and tears down a testing environment on each push to your remote repository (see the sketch below). Alternatively, a standalone CodeBuild project can be used with commands that cdk deploy an app and perform tests when triggered by an event.
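A minimal, hypothetical CDK Pipelines app in Python might look like this; the repository string, branch, and stage contents are assumptions.

# pipeline_app.py - sketch: each push re-synths the app and deploys the stage
from aws_cdk import App, Stack, Stage
from aws_cdk import pipelines
from constructs import Construct

class TestingStack(Stack):
    # Placeholder for the resources exercised in the test environment.
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

class TestingStage(Stage):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        TestingStack(self, "TestingStack")

class PipelineStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        pipeline = pipelines.CodePipeline(
            self, "Pipeline",
            synth=pipelines.ShellStep(
                "Synth",
                # git_hub() looks up a "github-token" secret by default
                input=pipelines.CodePipelineSource.git_hub("my-org/my-repo", "main"),
                commands=[
                    "pip install -r requirements.txt",
                    "npm install -g aws-cdk",
                    "cdk synth",
                ],
            ),
        )
        pipeline.add_stage(TestingStage(self, "Testing"))

app = App()
PipelineStack(app, "PipelineStack")
app.synth()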
Also worth mentioning is cdk watch for rapid local development iterations. It automatically deploys incremental changes as you code.

Deploy FastAPI microservice in Kubernetes via OpenFaaS

I have a big application structured with FastAPI (with many routers) that runs in AWS Lambda. I want to migrate it to a container inside Kubernetes. From my research, OpenFaaS is a great solution.
However, I can't find documentation about how to do this.
Does anyone have references or a better solution?
If you are using Python or Ruby, you can create a Dockerfile, use it to build a Docker image, and simply deploy it on Kubernetes. For example, for a Ruby app:
FROM ruby:2.7-alpine3.11
WORKDIR /home/app
# Copy the application code into the image
COPY . .
# Install Ruby dependencies
RUN bundle install
CMD ["ruby", "main.rb"]
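Since the question is about FastAPI, a hypothetical Python equivalent might look as follows; the base image, module path (main:app), and port are assumptions.

# Dockerfile for a FastAPI app (sketch)
FROM python:3.9-slim
WORKDIR /home/app
COPY . .
# requirements.txt is assumed to list fastapi and uvicorn
RUN pip install -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]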
For OpenFaaS, they have provided good labs with documentation to create async functions etc.
Labs: https://github.com/openfaas/workshop
If you are looking for examples, you can check out the official repo: https://github.com/openfaas/faas/tree/master/sample-functions
Extra
There are also other good options: Knative and Kubeless.
You can find a Python Kubeless example and a CI/CD example here: https://github.com/harsh4870/kubeless-kubernetes-ci-cd
Try using a template to build an upstream FastAPI application as an OpenFaaS function. This will create a Docker image you can run and deploy in your Kubernetes cluster.
You can see how to do so in the following GitHub repo

How to Kickstart Kubeflow Pipeline development in Python

I have been studying Kubeflow and trying to grasp how to write my first hello world program in it and run it locally on my Mac. I have kfp and kubectl installed locally on my machine. For testing purposes I want to write a simple pipeline with two functions: get_data() and add_data(). The docs are overwhelming, and I am not clear on how to develop locally without Kubernetes installed, how to connect to a remote GCP machine, or how to debug locally before creating a zip and uploading it. Is there a way to execute code locally and see how it runs on Google Cloud?
Currently you need Kubernetes to run KFP pipelines.
The easiest way to deploy KFP is via the Google Cloud Marketplace.
Alternatively, you can locally install Docker Desktop, which includes Kubernetes, and install the standalone version of KFP on it.
After that you can try this tutorial: Data passing in python components
Actually, you can install a reduced version of Kubeflow with MiniKF. More info: https://www.kubeflow.org/docs/distributions/minikf/minikf-vagrant/
Check whether you are using Kubeflow Pipelines from the Google Cloud Marketplace or a custom Kubernetes cluster. If you are using the managed one, you can see your pipeline running through the Kubeflow Pipelines management console.
For details about how to create components based on functions, see https://www.kubeflow.org/docs/components/pipelines/sdk/python-function-components/#getting-started-with-python-function-based-components (a sketch of such a pipeline follows).
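As an illustration of those function-based components, a minimal sketch of the get_data()/add_data() pipeline the asker describes might look like this; the kfp v1 SDK is assumed and the function bodies are placeholders.

# pipeline.py - hello-world KFP pipeline with two function-based components
import kfp
from kfp.components import create_component_from_func

def get_data() -> int:
    # Placeholder: pretend to fetch a value from somewhere.
    return 40

def add_data(x: int) -> int:
    # Placeholder: transform the fetched value.
    return x + 2

get_data_op = create_component_from_func(get_data)
add_data_op = create_component_from_func(add_data)

@kfp.dsl.pipeline(name="hello-world-pipeline")
def pipeline():
    data = get_data_op()
    add_data_op(data.output)

if __name__ == "__main__":
    # Compile to a file you can upload through the KFP UI or client.
    kfp.compiler.Compiler().compile(pipeline, "pipeline.yaml")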

How to deploy AWS python Lambda project locally?

I have an AWS Python Lambda function which contains a few Python files and also several dependencies.
The app is built using Chalice, so the function is mapped like any REST function.
Before deployment to the prod env, I want to test it locally, so I need to package this whole project (Python files and dependencies). I looked over the web for a solution but couldn't find one.
I managed to figure out how to deploy a single Python file, but I did not succeed with a whole project.
Take a look at Atlassian's LocalStack: https://github.com/atlassian/localstack
It's a full copy of the AWS cloud stack, running locally, so you can point your code at it instead of at real AWS (see the sketch below).
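For instance, once LocalStack is running, an AWS client can be aimed at it by overriding the endpoint; a hypothetical snippet (port 4566 is LocalStack's default edge port, and the credentials are dummies it accepts):

# localstack_client.py - sketch: talk to LocalStack instead of real AWS
import boto3

lambda_client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",  # LocalStack edge endpoint
    region_name="us-east-1",
    aws_access_key_id="test",              # LocalStack accepts dummy creds
    aws_secret_access_key="test",
)

# List the functions deployed into the local stack.
print(lambda_client.list_functions()["Functions"])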
I use Travis: I hooked it to my master branch in git, so that when I push to this branch, Travis tests my Lambda with a script that uses pytest, after installing all its dependencies with pip install. If all the tests pass, it then deploys the Lambda to AWS in my prod env.
