Connecting new data sources to Prometheus on Kubernetes - Python

To get metrics from the pods inside a Kubernetes cluster, I followed this guide: https://rpi4cluster.com/monitoring/monitor-intro/ , which worked very well.
However, what I do not understand is how to extend this to new data sources. That is, I am trying to monitor some additional apps that are not containers, and am running into difficulties.
What do I need to do so that Prometheus can scrape metrics from the sources I specify?
For instance, what do I need to set up so that my Prometheus deployment on K3s can scrape a Python app exposed on port 8000?
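For reference, the app exposes its metrics with the prometheus_client library, roughly like this (a sketch; the metric name and work loop are just placeholders):

import time
from prometheus_client import Counter, start_http_server

# Placeholder metric; replace with whatever the app actually measures
PROCESSED = Counter("app_items_processed_total", "Items processed by the app")

if __name__ == "__main__":
    # Prometheus would scrape http://<pod-ip>:8000/metrics
    start_http_server(8000)
    while True:
        PROCESSED.inc()  # stand-in for real work
        time.sleep(5)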

Related

Getting EKS namespaces using boto3

I have several AWS accounts, which I work with daily. In some of them I have EKS clusters. Usually, if I want to see the available namespaces in a cluster, I log into the cluster and run kubectl get ns from a Windows terminal.
How can I do the same thing with Python?
(I'm using boto3 to query AWS, and I'm trying to do everything within the boto3 module.)
I can already connect to the cluster, but describe_cluster doesn't return the information I'm looking for.
You cannot get cluster resources (such as namespaces) with boto3 alone. With additional libraries you can query Kubernetes cluster resources from Python; the kubernetes and eks-token libraries can help you.
An example usage:
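A rough sketch of listing namespaces with those libraries (the cluster name is an assumption; your AWS credentials must already allow eks:DescribeCluster and have access to the cluster):

import base64
import tempfile

import boto3
import eks_token
from kubernetes import client

cluster_name = "my-cluster"  # hypothetical cluster name

# Get the API endpoint and CA certificate with boto3
eks = boto3.client("eks")
cluster = eks.describe_cluster(name=cluster_name)["cluster"]

ca_file = tempfile.NamedTemporaryFile(delete=False, suffix=".crt")
ca_file.write(base64.b64decode(cluster["certificateAuthority"]["data"]))
ca_file.close()

# Get a bearer token for the cluster from your local AWS credentials
token = eks_token.get_token(cluster_name=cluster_name)["status"]["token"]

# Configure the official kubernetes client and list namespaces
conf = client.Configuration()
conf.host = cluster["endpoint"]
conf.ssl_ca_cert = ca_file.name
conf.api_key["authorization"] = "Bearer " + token

v1 = client.CoreV1Api(client.ApiClient(conf))
for ns in v1.list_namespace().items:
    print(ns.metadata.name)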

Enable Live Metrics on Application Insights for a Docker based Python Function App

I have a Docker based Python Function App running, which is connected to an Application Insights resource. I get all the usual metrics, but the Live Metrics fails telling me "Not available: your app is offline or using an older SDK".
I am using the azure-functions/python:4-python3.9-appservice image as a base. If I remember correctly, I was able to view Live Metrics when I simply deployed the Function App via ZIP deploy, but since switching to Docker this option has disappeared. Online I haven't been able to find the right information to fix this or to determine whether it is even possible.
AFAIK, the Live Metrics Stream for Python is currently not supported.
The Microsoft documentation says the currently supported languages are .NET, Java, and Node.js.
As an alternative, you can refer to the workaround given by #AJG: create a LogHandler and write the messages into a Cosmos DB container, from which they can be streamed to the console.
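A rough sketch of such a handler (assumes the azure-cosmos package; the account URL, key, database and container names are placeholders):

import logging
import uuid
from datetime import datetime, timezone

from azure.cosmos import CosmosClient  # pip install azure-cosmos

class CosmosLogHandler(logging.Handler):
    """Writes log records into a Cosmos DB container so they can be tailed elsewhere."""

    def __init__(self, url, key, database, container):
        super().__init__()
        cosmos = CosmosClient(url, credential=key)
        self._container = cosmos.get_database_client(database).get_container_client(container)

    def emit(self, record):
        try:
            self._container.create_item({
                "id": str(uuid.uuid4()),
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "level": record.levelname,
                "message": self.format(record),
            })
        except Exception:
            self.handleError(record)

logger = logging.getLogger("function-app")
logger.addHandler(CosmosLogHandler("<account-url>", "<account-key>", "logs-db", "logs"))
logger.warning("hello from the container")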

Use Neo4j in Azure DevOps or Azure Machine Learning

I have a project in Python. My goal is to create an article recommendation system with Neo4j.
Here is what I did:
Web scraping articles
Clean the data
Insert the data into Neo4j
Use Neo4j algorithms (graph data science library)
Everything is working, but only on my laptop.
I would like to migrate my project to Azure DevOps and Azure Machine Learning in order to have a web app.
I have an Azure account and I have created Azure ML and Azure DevOps resources for this project, but I don't know how I can use Neo4j in Azure DevOps and Azure Machine Learning. Maybe with a VM or a container?
I don't know much about Neo4j, but if this were my project I would focus on setting this up as an experiment pipeline run. You can go through all of your individual steps one by one.
Your pipeline steps:
Web collection
Cleaning
Neo4j
I feel like you need a "modeling" step in there, or else why are you using AML?
Then deploy it to a web endpoint in the AML service.
Here is my example of script steps. You just need to replace the individual PythonScriptSteps with your own code.
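A minimal sketch of what that pipeline can look like with the azureml-sdk (the script names, compute target, and experiment name are placeholders):

from azureml.core import Experiment, Workspace
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# One PythonScriptStep per stage; swap in your own scripts
scrape = PythonScriptStep(name="web-collection", script_name="scrape.py",
                          compute_target="cpu-cluster", source_directory="src")
clean = PythonScriptStep(name="cleaning", script_name="clean.py",
                         compute_target="cpu-cluster", source_directory="src")
load = PythonScriptStep(name="neo4j-load", script_name="load_neo4j.py",
                        compute_target="cpu-cluster", source_directory="src")

# Force the steps to run in order
clean.run_after(scrape)
load.run_after(clean)

pipeline = Pipeline(workspace=ws, steps=[scrape, clean, load])
Experiment(ws, "article-recommendation").submit(pipeline)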

How to specify depends_on for connectivity between front end and backend app in k8s

I am trying to convert a docker-compose file into native Kubernetes manifests. In my docker-compose I have depends_on for my nginx, which depends on the backend MySQL and Python app.
In Kubernetes, how do I map the connection from my nginx (static site code inside /var/www/html/app) to the backend Python and MySQL using a k8s Service or ENV?
Do I need to specify a connection string in my nginx web app (static site code), or can it be managed using ENV? How does the connection work?
Thanks in advance.
As you can read here:
Kubernetes supports 2 primary modes of finding a Service - environment
variables and DNS. The former works out of the box while the latter
requires the CoreDNS cluster addon.
So you don't have to hardcode it. Instead, in your application code you can refer to the Kubernetes Service that exposes your backend Pods to other Pods inside your cluster.
If you decide to use environment variables, remember that your Service must be created before your frontend Pods so that the corresponding environment variables are set. If the frontend Pods were already running before you exposed your backend Pods via a ClusterIP Service, no problem: you can recreate those Pods, e.g. by scaling your Deployment down and back up:
kubectl scale deployment my-nginx --replicas=0; kubectl scale deployment my-nginx --replicas=2
If you've exposed your python-backend Deployment by running:
kubectl expose deployment python-backend
your Service will have the same name i.e. python-backend.
If you need to refer to it from any Pod in your cluster (e.g. frontend ones) you can use both ways:
Environment variables:
$PYTHON_BACKEND_SERVICE_HOST:80
or even:
$PYTHON_BACKEND_SERVICE_HOST:$PYTHON_BACKEND_SERVICE_PORT
if you think your backend application port may change for some reason.
DNS:
Simply use your Service name, i.e. python-backend, if both your frontend and backend Pods are running within the same namespace, or use the fully qualified domain name (FQDN), which has the following form:
<service-name>.<namespace>.svc.cluster.local
which in your case will be:
python-backend.default.svc.cluster.local
So, keeping in mind that you added one of the following connection strings in your application code (or its configuration files):
either $PYTHON_BACKEND_SERVICE_HOST:80 or python-backend.default.svc.cluster.local:80
you only need to remember to name your Services accordingly.
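For illustration, from any Python Pod in the same cluster either form resolves to the backend Service (the /health path is just a placeholder endpoint):

import os
import requests

# Environment-variable discovery (the Service must exist before this Pod started)
host = os.environ["PYTHON_BACKEND_SERVICE_HOST"]
port = os.environ.get("PYTHON_BACKEND_SERVICE_PORT", "80")
print(requests.get(f"http://{host}:{port}/health").status_code)

# DNS discovery (works regardless of Pod start order)
print(requests.get("http://python-backend.default.svc.cluster.local:80/health").status_code)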

Deploying Python script daily on Azure

I have a Python script that pulls some data from an Azure Data Lake cluster, performs some simple compute, then stores it into a SQL Server DB on Azure. The whole shebang runs in about 20 seconds. It needs sqlalchemy, pandas, and some Azure data libraries. I need to run this script daily. We also have a Service Fabric cluster available to use.
What are my best options? I thought of containerizing it with Docker and making it into an HTTP-triggered API, but then how do I trigger it once per day? I'm not good with Azure or microservices design, so this is where I need help.
You can use WebJobs in App Service. There are two types of Azure WebJobs to choose from: Continuous and Triggered. From what you describe, you need the Triggered type.
You can refer to the documentation here for more details. In addition, here is how to run tasks in WebJobs.
Also, you can use a timer-triggered Azure Function written in Python, which was made generally available in recent months.
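A minimal sketch of such a timer function (v1 programming model; the schedule in function.json would be a daily CRON expression such as "0 0 6 * * *"):

# __init__.py
import logging
import azure.functions as func

def main(mytimer: func.TimerRequest) -> None:
    if mytimer.past_due:
        logging.info("The timer is past due")
    # pull from the Data Lake, compute, write to SQL Server here
    logging.info("Daily job executed")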
