Deploy FastAPI microservice in Kubernetes via OpenFaaS - python

I have a large application built with FastAPI (with many routers) that runs in AWS Lambda. I want to migrate it to a container running inside Kubernetes. From my research, OpenFaaS looks like a great solution.
However, I can't find documentation on how to do this.
Does anyone have references or a better solution?

If you are using Python or Ruby, you can create a Dockerfile, use it to build a Docker image, and simply deploy that image on Kubernetes. For example, a minimal Ruby image:
FROM ruby:2.7-alpine3.11
WORKDIR /home/app
COPY . .
RUN bundle install
CMD ["ruby", "main.rb"]
For OpenFaaS, they have provided good labs with documentation for creating async functions, etc.
Labs: https://github.com/openfaas/workshop
If you are looking for examples, you can check out the official repo: https://github.com/openfaas/faas/tree/master/sample-functions
Extra
There are also other good options such as Knative or Kubeless.
You can find a Python Kubeless example and a CI/CD example here: https://github.com/harsh4870/kubeless-kubernetes-ci-cd

Try using a template to build an upstream FastAPI application as an OpenFaaS function. This will create a Docker image you can run and deploy in your Kubernetes cluster.
You can see how to do so in the following GitHub repo.
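As a sketch of that workflow (assuming faas-cli is installed and pointed at your OpenFaaS gateway; the function name and registry prefix below are placeholders):
# Scaffold a function from the built-in "dockerfile" template so you can reuse your own FastAPI Dockerfile
faas-cli new my-fastapi-fn --lang dockerfile --prefix <your-registry>
# Build the image, push it to the registry, and deploy it to OpenFaaS on Kubernetes
# (adjust the YAML file name to whatever faas-cli generated, e.g. stack.yml)
faas-cli up -f my-fastapi-fn.yml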

How to deploy to AWS using CDK and SageMaker?

I want to use this repo and I have created and activated a virtualenv and installed the required dependencies.
I get an error when I run pytest.
And under the file binance_cdk/app.py it describes the following tasks:
App (PSVM method): entry point of the program.
Note:
Steps to set up CDK:
1. Install npm
2. cdk init (creates an empty project)
3. Add in your infrastructure code.
4. Run cdk synth
5. cdk bootstrap <aws_account>/
6. Run cdk deploy ---> This creates a CloudFormation .yml file and the AWS resources will be created as per the mentioned stack.
I'm stuck on step 3: what do I add as infrastructure code? And if I want to use this on Amazon SageMaker, which I am not familiar with, do I even bother doing this on my local terminal, or do I do the whole process on SageMaker regardless?
Thank you in advance for your time and answers !
The infrastructure code is the Python code that you write for the resources you want to provision, such as SageMaker resources. In the example you provided, the infra code they have creates a Lambda function. You can do this locally on your machine; the question is what you want to achieve with SageMaker. If you want to create an endpoint, then follow the CDK Python docs for SageMaker to identify the steps for creating an endpoint. Here are two guides: the first is an introduction to the AWS CDK and getting started; the second is an example of using the CDK with SageMaker to create an endpoint for inference.
CDK Python Starter: https://towardsdatascience.com/build-your-first-aws-cdk-project-18b1fee2ed2d
CDK SageMaker Example: https://github.com/philschmid/cdk-samples/tree/master/sagemaker-serverless-huggingface-endpoint
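To make step 3 concrete, here is a minimal sketch of infrastructure code in CDK v2 for Python that creates a Lambda function, similar to what the repo's infra code does (the stack name, logical ID, and asset path are placeholders, not taken from the repo):
import aws_cdk as cdk
from aws_cdk import Stack, aws_lambda as _lambda
from constructs import Construct


class MyServiceStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # One Lambda function whose code lives in a local "lambda/" folder
        _lambda.Function(
            self,
            "MyFunction",                       # hypothetical logical ID
            runtime=_lambda.Runtime.PYTHON_3_9,
            handler="index.handler",            # lambda/index.py must define handler()
            code=_lambda.Code.from_asset("lambda"),
        )


app = cdk.App()
MyServiceStack(app, "MyServiceStack")
app.synth()
Running cdk synth and cdk deploy against a file like this is what turns the Python code into a CloudFormation template and then into real resources.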

Is there a way to run an already-built python API from google cloud?

I built a functioning python API that runs from my local machine. I'd like to run this API from Google Cloud SDK, but after looking through the documentation and googling every variation of "run local python API from google cloud SDK" I had no luck finding anything that wouldn't involve me restructuring the script heavily. I have a hunch that "google run" or "API endpoint" might be what I'm looking for, but as a complete newbie to everything other than Firestore (which I would rather not convert my entire api into if I don't have to), I want to ask if there's a straightforward way to do this.
tl;dr The API runs successfully when I simply type "python apiscript.py" into local console, is there a way I can transfer it to Google Cloud without adjusting the script itself too much?
IMO, the easiest solution for a portable app is to use a container. And to host the container in serverless mode, you can use Cloud Run.
In the getting started guide, you have a Python example. The main task for you is to create a Dockerfile:
FROM python:3.9-slim
ENV PYTHONUNBUFFERED True
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN pip install -r requirements.txt
CMD python apiscript.py
I adapted the script to your description, and I assumed that you have a requirements.txt file for the dependencies.
Now, build your container
gcloud builds submit --tag gcr.io/<PROJECT_ID>/apiscript
Replace PROJECT_ID with your project ID, not the name of the project (even if they are sometimes the same, this is a common mistake for newcomers).
Deploy on Cloud Run
gcloud run deploy --region=us-central1 --image=gcr.io/<PROJECT_ID>/apiscript --allow-unauthenticated --platform=managed apiscript
I assume that your API is served on port 8080; otherwise you need to add a --port parameter to override this.
That should be enough.
This is a getting-started example; you can change the region, the security mode (here, no authentication), the name, and the project.
In addition, for this deployment, the Compute Engine default service account is used. You can use another service account if you want, but in any case you need to grant the service account that is used permission to access the Firestore database.
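On the port point: if apiscript.py uses Flask (an assumption, since the question doesn't say which framework it uses), a minimal sketch of binding to the port Cloud Run expects looks like this:
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return {"status": "ok"}

if __name__ == "__main__":
    # Cloud Run tells the container which port to listen on via the PORT env var (8080 by default)
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))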

how to run docker compose using docker python sdk

I would like to run docker-compose via the Python Docker SDK.
However, I couldn't find any reference on how to achieve this in the Python SDK documentation. I could also use subprocess, but I had some other difficulties with that; see here: docker compose subprocess
I am working on the same issue and was looking for answers, but nothing so far. The best shot I can give it is to simplify the docker-compose logic yourself. For example, if you have a YAML file with a network and services, create them separately using the Python Docker SDK and connect the containers to a network.
It gets cumbersome, but eventually you can get things working that way from Python.
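As an illustration of that approach (a minimal sketch using the docker SDK, assuming the Docker daemon is running locally; the image and container names are placeholders):
import docker

client = docker.from_env()

# Re-create what a small compose file would declare: one network and two services
network = client.networks.create("app_net", driver="bridge")

db = client.containers.run(
    "redis:7",                 # placeholder image standing in for a compose service
    name="db",
    network="app_net",
    detach=True,
)

web = client.containers.run(
    "my-web-app:latest",       # hypothetical application image
    name="web",
    network="app_net",
    ports={"8000/tcp": 8000},  # equivalent of a compose "ports" mapping
    detach=True,
)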
I created a package to make this easy: python-on-whales
Install with
pip install python-on-whales
Then you can do
from python_on_whales import docker
docker.compose.up()
docker.compose.stop()
# and all the other commands.
You can find the documentation for the package here: https://gabrieldemarmiesse.github.io/python-on-whales/

How to deploy AWS python Lambda project locally?

I have an AWS Python Lambda function which contains a few Python files and also several dependencies.
The app is built using Chalice, so the function is exposed like any REST function.
Before deploying to the prod env, I want to test it locally, so I need to package this whole project (Python files and dependencies). I looked around the web for a solution but couldn't find one.
I managed to figure out how to deploy one Python file, but I did not succeed with a whole project.
Take a look at Atlassian's LocalStack: https://github.com/atlassian/localstack
It's a full copy of the AWS cloud stack, running locally.
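Once LocalStack is running, you can point boto3 at it via endpoint_url, for example (a sketch; the port is LocalStack's usual edge port and the function name is hypothetical):
import boto3

lambda_client = boto3.client(
    "lambda",
    endpoint_url="http://localhost:4566",  # LocalStack edge endpoint
    region_name="us-east-1",
    aws_access_key_id="test",              # dummy credentials are fine for LocalStack
    aws_secret_access_key="test",
)

# Invoke a function that was deployed into LocalStack (hypothetical name)
response = lambda_client.invoke(FunctionName="my-chalice-function")
print(response["Payload"].read())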
I use Travis: I hooked it to my master branch in git, so that when I push to this branch, Travis tests my Lambda with a script that uses pytest, after installing all of its dependencies with pip install. If all the tests pass, it then deploys the Lambda to AWS in my prod env.
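As a sketch of what such a pytest script can look like for a Chalice app (assuming a Chalice version recent enough to ship chalice.test.Client, and an app.py module exposing the Chalice app object with a / route):
from chalice.test import Client
from app import app  # the Chalice application object


def test_index_returns_ok():
    # Chalice's built-in test client exercises the routes without deploying anything
    with Client(app) as client:
        response = client.http.get("/")
        assert response.status_code == 200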

My first cloud project

A bit lost on where to start after exploring DigitalOcean/AWS.
I have looked at the documentation for Docker and boto3, and Docker seems to be the direction I want to go in (Docker for AWS), but I am unsure whether these are mutually exclusive solutions.
From what I understand the following workflow is possible:
Code local python (most any language, but I am using py)
Deploy local code (aka upload) to a server
Call that code from a local machine with some argument(s) via a script leveraging some cloud API (boto3/docker?)
Grab finished result file from my cloud (pull file that is JSON/CSV etc and contains my results) using an API (boto3/docker?)
I thought this would be way easier to get up and running (maybe it is, and I am just missing something).
I feel like I am hitting my head against the wall on something that is intended to not be so tough.
Any pointers/guidance are hugely appreciated.
Thank you!
boto3 is a Python interface to AWS.
Docker is a software tool for managing images and deploying them as containers.
You can use boto3 to create your Amazon machine (an EC2 instance), then install Docker on that machine and pull containers from a Docker repository to run them.
There are also solutions like docker-machine (Docker Toolbox for Windows/Mac) that can be used to create machines on Amazon and then run your containers on that machine directly from your local Docker repository.
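To illustrate the boto3 side of that workflow (a minimal sketch; the AMI ID, key pair, bucket, and key names are placeholders you would replace):
import boto3

# Create the "Amazon machine": launch a single small EC2 instance
ec2 = boto3.resource("ec2", region_name="us-east-1")
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    KeyName="my-key-pair",            # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
print(instances[0].id)

# Later, grab a finished result file (e.g. JSON/CSV) from S3
s3 = boto3.client("s3")
s3.download_file("my-results-bucket", "results.json", "results.json")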
