How to use MLflow plugins with a remote host UI - Python

I have an MLflow EC2 instance running on AWS. I want to develop an MLflow plugin to save registered models to a specific AWS S3 bucket.
I have read all the documentation on plugins for MLflow and, if my understanding is correct, to develop and use a plugin I need two things:
Write the code for a package following the MLflow plugin standards, as in: https://www.mlflow.org/docs/latest/plugins.html
Change the tracking URI by adding file-plugin: at the beginning:
MLFLOW_TRACKING_URI=file-plugin:$(PWD)/mlruns python examples/quickstart/mlflow_tracking.py
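For reference, the plugin package itself is registered through setuptools entry points; a minimal setup.py sketch might look like this (the package and class names are hypothetical placeholders, not part of MLflow):

# setup.py -- minimal sketch; "my_s3_plugin" and "PluginFileStore" are
# hypothetical names, not part of MLflow itself.
from setuptools import setup, find_packages

setup(
    name="my-s3-plugin",
    packages=find_packages(),
    install_requires=["mlflow"],
    entry_points={
        # Registers a tracking store for URIs using the "file-plugin" scheme.
        "mlflow.tracking_store": "file-plugin=my_s3_plugin.store:PluginFileStore",
    },
)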
Now, this is simple if I want the plugin to work in a Python script: I just need to install my custom plugin package in my Python environment and set the tracking URI as stated above.
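For completeness, the tracking URI can also be set from inside the script instead of through the environment variable (a small sketch; the path is illustrative):

import mlflow

# Equivalent to exporting MLFLOW_TRACKING_URI; the path is illustrative.
mlflow.set_tracking_uri("file-plugin:/home/ubuntu/mlruns")

with mlflow.start_run():
    mlflow.log_param("alpha", 0.5)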
However, I am not sure how to make the same plugin work when using the UI served from my AWS instance. I found no way to set MLFLOW_TRACKING_URI so that it includes file-plugin:.
Does anyone know how to solve this issue? How can I make sure my plugin works when interacting with mlflow through the ui?
Thanks in advance!

Related

How to enable Docker Build Context in azure machine learning studio?

I'm trying to create an environment from a custom Dockerfile in the UI of Azure Machine Learning Studio. It used to work when I used the option "Create a new Docker context".
I decided to do it through code and build the image on compute, meaning I used this line to set it:
ws.update(image_build_compute = "my_compute_cluster")
But now I can no longer create environments from a Docker build context through the UI. I tried setting the image_build_compute property back to None or False, but neither works.
I also tried deleting the property through the CLI, without success. I checked another machine learning workspace and this property doesn't exist there.
Is there a way for me to completely remove this property or enable the Docker build context again?
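One thing worth checking is a hedged sketch with the v1 Python SDK below; whether an empty string actually resets the property is an assumption to verify, since None in Workspace.update conventionally means "leave unchanged":

from azureml.core import Workspace

ws = Workspace.from_config()

# None normally means "don't change this setting", which would explain why
# ws.update(image_build_compute=None) has no effect. Passing an empty string
# to clear the value is an assumption -- verify against the azureml-core docs.
ws.update(image_build_compute="")

# Inspect the workspace details to see whether the property is gone.
print(ws.get_details())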
When you create a compute cluster, you can point workspace image builds at it; see the code block below. Note that image_build_compute expects the name of a compute cluster, not a VM size such as "Standard_DS12_v2":
workspace.update(image_build_compute = "my_compute_cluster")
After updating, you can create the environment through the portal UI using a Dockerfile, and confirm that the environment was built from the Docker image and file.

Enable Live Metrics on Application Insights for a Docker based Python Function App

I have a Docker-based Python Function App running, connected to an Application Insights resource. I get all the usual metrics, but Live Metrics fails with "Not available: your app is offline or using an older SDK".
I am using the azure-functions/python:4-python3.9-appservice image as a base. If I remember correctly, I was able to view Live Metrics when I simply deployed a Function App via ZIP deploy, but since switching to Docker this option has disappeared. I haven't been able to find the right information online to fix this, or to determine whether it is even possible.
AFAIK, the Live Metrics stream is currently not supported for Python.
The Microsoft documentation says the currently supported languages are .NET, Java and Node.js.
As an alternative, you can follow the approach suggested by #AJG: create a LogHandler and write the messages into a Cosmos DB container, from where they can be streamed to a console.
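A minimal sketch of such a handler, assuming the azure-cosmos package; the account URL, key, database and container names are all placeholders:

import logging
import uuid

from azure.cosmos import CosmosClient  # pip install azure-cosmos

class CosmosDBHandler(logging.Handler):
    """Hypothetical log handler that writes each record to a Cosmos DB container."""

    def __init__(self, url, key, database, container):
        super().__init__()
        client = CosmosClient(url, credential=key)
        self.container = client.get_database_client(database).get_container_client(container)

    def emit(self, record):
        try:
            self.container.upsert_item({
                "id": str(uuid.uuid4()),  # Cosmos DB items need a unique id
                "level": record.levelname,
                "message": self.format(record),
            })
        except Exception:
            self.handleError(record)

# Usage: attach the handler to the function's logger (names are illustrative).
logger = logging.getLogger("my_function")
logger.addHandler(CosmosDBHandler("https://myaccount.documents.azure.com",
                                  "<key>", "logs-db", "logs"))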

Azure function local setup in PyCharm and publish to Azure

I've just switched gears to Azure, specifically Azure Functions. Deployment is very straightforward using VS Code, but I've been unable to find any comprehensive, working end-to-end doc/resource on how to set up an Azure Function in PyCharm, publish it from PyCharm to Azure, and debug it locally.
I've looked in the Microsoft docs but couldn't find anything of value on setting up Azure Functions in PyCharm. Could you please suggest whether it is possible in PyCharm, or do I have to switch to VS Code? (I don't want to switch just because of Azure Functions, though.)
PS: If it is possible to set it up in PyCharm, a link or details on how to do it would be helpful.
Thanks in advance for your help.
Azure Functions are now supported in Rider, WebStorm and IntelliJ, covering TypeScript, Node, C#, Python and Java. But PyCharm is likely the only JetBrains product that doesn't have a single-step setup to run and debug Azure Functions.
Currently there is no Azure Functions extension that lets you create, debug and publish functions from PyCharm.
If you don't want to switch to VS Code, you might consider using IntelliJ for running, debugging and publishing Azure Functions for now.
Check this discussion for more information. Also check this approach to debugging locally; it can be considered a workaround rather than a solution.
At the moment PyCharm does not directly integrate with Azure Functions.
I've set up a DevOps pipeline for my function, so every time I need to run and test it, I push my code from PyCharm to a dedicated branch and the function is deployed to Azure.

Installing a Python library wheel in an Azure Machine Learning Compute Cluster

I am working on an Azure Machine Learning Studio pipeline via the Designer. I need to install a Python library wheel (a third-party tool) on the same compute so that I can import it in the Designer. I usually install packages on compute instances via the terminal, but the Azure Machine Learning Studio Designer uses a compute cluster, not a compute instance.
Is there any way to access the terminal so that I can install the wheel in the compute cluster and have access to the library via the designer? Thanks!
There isn't an easy path for this. Your options are to either switch to a code-first pipeline definition approach, or try your darndest to extend the Designer UI to meet your needs.
Define pipelines with v2 CLI or Python SDK
I get the impression that you know Python quite well, so you should really check out the v2 CLI or the Python SDK for Pipelines. I'd recommend starting with the v2 CLI, as it will be the way to define AML jobs going forward.
Both require some initial learning, but will give you all the flexibility that isn't currently available in the UI.
custom Docker image
The "Execute Python Script" module allows use a custom python Docker image. I think this works? I just tried it but not with a custom .whl file, and it looked like it worked

How do I connect to an external Oracle database using the Python cx_Oracle package on Google App Engine Flex?

My Python App Engine Flex application needs to connect to an external Oracle database. Currently I'm using the cx_Oracle Python package which requires me to install the Oracle Instant Client.
I have successfully run this locally (on macOS) by following the Instant Client installation steps. The steps required me to do the following:
Make a directory called /opt/oracle
Create a symlink from /opt/oracle/instantclient_12_2/libclntsh.dylib.12.1 to ~/lib/
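For context, with the client installed locally, the connection code itself looks roughly like this (credentials, host and service name are placeholders; init_oracle_client requires cx_Oracle 8+, so treat that call as an assumption for older versions):

import cx_Oracle

# Optional with cx_Oracle 8+: point the driver at the Instant Client
# directory explicitly instead of relying on LD_LIBRARY_PATH.
cx_Oracle.init_oracle_client(lib_dir="/opt/oracle/instantclient_12_2")

dsn = cx_Oracle.makedsn("dbhost.example.com", 1521, service_name="ORCL")
with cx_Oracle.connect(user="scott", password="tiger", dsn=dsn) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT 1 FROM dual")
        print(cursor.fetchone())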
However, I am confused about how to do the same thing in App Engine Flex (instructions). Specifically, here's what I'm confused about:
The instructions say I should run sudo yum install libaio to install the libaio package. How do I do this on GAE Flex? Or is this package already available?
I think I can add the Instant Client files to GAE (a whopping ~100 MB!), then have app.yaml set the LD_LIBRARY_PATH environment variable via export LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH. Will this work?
Is this even feasible without using custom Docker containers on App Engine Flex?
Overall I'm not sure if I'm on the right track. Would love to hear from someone who has managed this before :)
If any of your dependencies is not available in the base GAE flex images provided by Google and cannot be installed via pip (because it's not a Python package, it's not available on PyPI, or for whatever other reason), then you can't use the requirements.txt file to get it installed in your GAE flex app.
The proper way to satisfy such dependencies would be to build your own custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
Yes, that means providing a custom Dockerfile. In your particular case you'd be installing the Instant Client and libaio inside this Dockerfile. See also Building Custom Runtimes.
Answering your first question: I think the instructions on the Oracle website simply show that you have to install that library for your application to work.
In the case of App Engine flex, the way to ensure that Python libraries are present in the deployment is the requirements.txt file. There is a documentation page which explains how to do so.
On the other hand, I would assume that the "Instant Client files" are not libraries but data your app needs to run. You could serve them from Google Cloud Storage, or any other storage alternative within Google Cloud.
I believe that, if this is all you need for your app to work, pushing your own custom container should not be necessary.
