Mlflow "invocations" prefix - python

We are deploying several MLflow models using Docker and Kubernetes. An Ingress load balancer in K8s is mandatory for security reasons, and we now need to deploy more than one MLflow image in the same cluster.
When we run a container, the model server starts the application using the "/invocations" path for POST requests.
That means we can't differentiate the models by prefix, because every container uses the same one.
My question is: is there any way to change the "/invocations" prefix on MLflow model images?
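One common workaround (a sketch, not a built-in MLflow option) is to leave "/invocations" as-is and let the Ingress do the differentiation: give each deployment its own external path prefix and add a rewrite rule mapping it back to /invocations on the matching Service. A minimal Python client illustrating the idea; the hostname, prefixes, and payload shape below are placeholders:

import requests

INGRESS_HOST = "http://models.example.com"  # placeholder hostname

def predict(model_prefix, records):
    # POST to the per-model external prefix; the assumed Ingress
    # rewrite forwards the request to /invocations on the right pod
    url = f"{INGRESS_HOST}/{model_prefix}/invocations"
    # payload follows the MLflow 2.x scoring protocol; older servers
    # expect a different JSON layout
    resp = requests.post(url, json={"dataframe_records": records}, timeout=10)
    resp.raise_for_status()
    return resp.json()

# two models in the same cluster, told apart only by their prefix
print(predict("model-a", [{"x": 1.0}]))
print(predict("model-b", [{"x": 1.0}]))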

Related

Consume a docker container inside a Django docker container? Connecting two docker containers

I have a Django container and I want to consume another DL container inside it. For example, I have a Django app that predicts image classes, and I want to make the prediction using a Docker container rather than a Python library. The Django app will be containerized as well. In production I will have three Docker containers: the Django container, a Postgres container, and a YoloV5 container. How can I link Django with YoloV5 so that the prediction inside Django is done using YoloV5?
In short: I want to connect a deep learning container to the Django container so that predictions are made by the DL container, not a Python package.
The easiest way to do this is to make a network call to the other container. You may find it simplest to wrap the YoloV5 code in a very thin web layer, e.g. using Flask, to create an API, and then call that API from your Django container using requests.
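A minimal sketch of that thin web layer, assuming a hypothetical run_inference() helper standing in for however your YoloV5 code is actually invoked:

from flask import Flask, jsonify, request

app = Flask(__name__)

def run_inference(image_bytes):
    # placeholder for the real YoloV5 prediction call
    raise NotImplementedError

@app.route("/predict", methods=["POST"])
def predict():
    image_bytes = request.files["image"].read()
    return jsonify(run_inference(image_bytes))

if __name__ == "__main__":
    # listen on all interfaces so other containers can reach it
    app.run(host="0.0.0.0", port=5000)

And on the Django side, assuming the containers share a Docker network on which the service is reachable as yolov5 (adjust the hostname to your setup):

import requests

def classify(image_file):
    resp = requests.post("http://yolov5:5000/predict",
                         files={"image": image_file}, timeout=30)
    resp.raise_for_status()
    return resp.json()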
As suggested by Nick and others, the solution is to call the YoloV5 container from inside the Django container using host.docker.internal. That is, inside the Django container (in views.py) I used host.docker.internal to reach the YoloV5 container.

Connecting new data sources to Prometheus on Kubernetes

To get metrics from the pods inside a Kubernetes cluster, I followed this guide: https://rpi4cluster.com/monitoring/monitor-intro/ , which worked very well.
However, what I do not understand is how to extend this setup to new data sources. I am trying to monitor some additional apps that are not containers, and I am running into difficulties.
What do I need to do so that Prometheus can scrape metrics from sources I specify?
For instance, what do I need to set up so that my Prometheus deployment on K3s can scrape a Python app exposed on port 8000?
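On the application side, one way to make a Python app scrapeable (a sketch using the prometheus_client package, which is an assumption about how your app is built) is to expose a /metrics endpoint on port 8000; Prometheus then also needs a scrape target pointing at it, e.g. a static scrape_configs entry or, with the Prometheus Operator, a ServiceMonitor:

import random
import time
from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("myapp_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("myapp_queue_depth", "Items currently queued")

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics on 0.0.0.0:8000
    while True:  # stand-in for the app's real work
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 10))
        time.sleep(1)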

TensorFlow Serving: is it ok that TensorFlow Serving relies on the file system for storing models?

I see that trained models are served from the file system when using TensorFlow Serving. Is it ok to rely on the file system alone?
Is there a mechanism to restore these models in case we lose all the files?
You can create a custom serving image that has your model built into the container; you can deploy it using Docker, and it will load your model for serving on startup. Ref: TensorFlow Serving with Docker
You can also keep your model in Google Cloud Platform and point the server at the GCS path directly, using the command below:
tensorflow_model_server --port=8080 --model_base_path=gs://bucket_name/folder_models/
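TensorFlow Serving expects numbered version subdirectories under --model_base_path and picks up the highest version automatically. A sketch of exporting a model into that layout on GCS (the bucket and folder names are the placeholders from the command above; writing to gs:// paths requires GCS filesystem support and credentials):

import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")

# version "1"; bump the number to roll out a new version
tf.saved_model.save(model, "gs://bucket_name/folder_models/1/")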

How to specify depends_on for connectivity between frontend and backend apps in k8s

I am trying to convert a docker-compose file to native Kubernetes manifests. In my docker-compose file I have depends_on on my nginx service, which depends upon the backend MySQL and Python app.
In Kubernetes, how do I map the connection from my nginx (static site code inside /var/www/html/app) to the backend Python app and MySQL using a K8s Service or environment variables?
Do I need to specify a connection string in my nginx web app (static site code), or can it be managed using environment variables, and how does the connection work?
Thanks in advance.
As you can read here:
Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the CoreDNS cluster addon.
So you don't have to hardcode anything. Instead, you can refer in your application code to a Kubernetes Service, which exposes your backend Pods to other Pods inside the cluster.
If you decide to use environment variables, remember that your Service must be created before your frontend Pods so that the corresponding environment variables are set. If the frontend Pods were already running before you exposed your backend Pods via a ClusterIP Service, no problem: you can recreate them, e.g. by scaling your Deployment down and back up:
kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2
If you've exposed your python-backend Deployment by running:
kubectl expose deployment python-backend
your Service will have the same name, i.e. python-backend.
If you need to refer to it from any Pod in your cluster (e.g. your frontend Pods), you can use either of two ways:
Environment variables:
$PYTHON_BACKEND_SERVICE_HOST:80
or even:
$PYTHON_BACKEND_SERVICE_HOST:$PYTHON_BACKEND_SERVICE_PORT
if you think your backend application port may change for some reason.
DNS:
Simply use your Service name, i.e. python-backend, if both your frontend and backend Pods are running within the same namespace, or use the fully qualified domain name (FQDN), which has the following form:
<service-name>.<namespace>.svc.cluster.local
which in your case will be:
python-backend.default.svc.cluster.local
So, keeping in mind that your application code (or its configuration files) contains one of the following connection strings:
either $PYTHON_BACKEND_SERVICE_HOST:80 or python-backend.default.svc.cluster.local:80,
you only need to remember to name your Services accordingly.
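For illustration, here is how frontend Python code might resolve the backend using either mode, assuming the Service is named python-backend in the default namespace:

import os

# environment variables: injected by Kubernetes into Pods started
# after the Service was created
host = os.environ.get("PYTHON_BACKEND_SERVICE_HOST")
port = os.environ.get("PYTHON_BACKEND_SERVICE_PORT", "80")

# DNS: fall back to the Service's in-cluster FQDN
if host is None:
    host = "python-backend.default.svc.cluster.local"

backend_url = f"http://{host}:{port}"
print(backend_url)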

How many docker containers should a Java web app w/ database have?

I'm trying to "dockerize" my java web application and finally run the docker image on EC2.
My application is a WAR file and connects to a database. There is also a python script which the application calls via REST. The python side uses the tornado webserver
Question 1:
Should I have the following Docker containers?
Container for Application Server (Tomcat 7)
Container for HTTP Server (nginx or httpd)
Container for postgres db
Container for python script (this will have the tornado web server and my python script)
Question 2:
What is the best way to build the Dockerfiles? I will have to do some trial and error to find the commands that need to go into the Dockerfile for each container. Should I set up an Ubuntu VM to experiment on, and once I nail down the commands I need, put them into the Dockerfile for that container?
That list looks about right.
The advantage of splitting up your stack into separate containers is that you can (in many cases) use off-the-shelf official images, and only have to provide the right configuration to make them work together. In addition, you can upgrade the components (containers) separately.
Note that combining multiple services in a single container is not forbidden, but in Docker it is generally best practice to separate concerns and have a single container be responsible for a single task/service.
To get all containers started with the right configuration, docker-compose is a good choice; it enables you to create a single file (docker-compose.yml, see https://docs.docker.com/compose/compose-file/) that describes your project: which images to build for each container, how the containers relate to each other, and which configuration to pass to them.
With docker-compose you can then start all containers by simply running
docker-compose up -d
You can use Docker Machine to create a Docker development environment on Mac or Windows. This is really good for trial and error; there is no need for an Ubuntu VM.
A Docker container should do one thing only, so your application would consist of multiple containers, one for each component. You've already clearly identified the different containers for your application. Here is how the workflow might look:
Create a Dockerfile for each container: Tomcat, nginx, postgres, tornado
Deploy the application to Tomcat in the Dockerfile or by mapping volumes
Create an image for each container
Optionally push these images to Docker Hub
If you plan to deploy these containers on multiple hosts, create an overlay network
Use Docker Compose to start these containers together; it will use the network created previously. Alternatively, you can pass --x-networking to have Docker Compose create the network.
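For the tornado container in the list above, a minimal sketch of wrapping the Python script as a REST endpoint (run_script() is a hypothetical stand-in for the existing script logic):

import json
import tornado.ioloop
import tornado.web

def run_script(payload):
    # placeholder for what the real script does
    return {"result": "ok", "echo": payload}

class ScriptHandler(tornado.web.RequestHandler):
    def post(self):
        payload = json.loads(self.request.body or "{}")
        self.write(run_script(payload))  # write() serializes dicts to JSON

def make_app():
    return tornado.web.Application([(r"/run", ScriptHandler)])

if __name__ == "__main__":
    make_app().listen(8888)  # expose this port from the container
    tornado.ioloop.IOLoop.current().start()

The WAR application can then POST JSON to http://<container-host>:8888/run.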
