I am working on an Azure Machine Learning Studio pipeline via the Designer. I need to install a Python library wheel (a third-party tool) on the same compute so that I can import it in the Designer. I usually install packages on compute instances via the terminal, but the Azure Machine Learning Studio Designer uses a compute cluster, not a compute instance.
Is there any way to access the terminal so that I can install the wheel in the compute cluster and have access to the library via the designer? Thanks!
There isn't an easy path for this. Your options are either to switch to a code-first pipeline definition approach, or to try your darndest to extend the Designer UI to meet your needs.
Define pipelines with v2 CLI or Python SDK
It sounds like you know Python quite well, so you should really check out the v2 CLI or the Python SDK for pipelines. I'd recommend starting with the v2 CLI, as it will be the way to define AML jobs going forward.
Both require some initial learning, but either will give you flexibility that isn't currently available in the UI.
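For a sense of what that looks like, here is a minimal sketch using the v1 Python SDK (azureml-core). The wheel, script, and cluster names (my_lib-0.1-py3-none-any.whl, train.py, cpu-cluster) are placeholders for your own; add_private_pip_wheel uploads the wheel to workspace storage and returns a pip-installable URL:

from azureml.core import Workspace, Environment, Experiment
from azureml.core.runconfig import RunConfiguration
from azureml.pipeline.core import Pipeline
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# Upload the private wheel to workspace storage; returns a pip-installable URL.
whl_url = Environment.add_private_pip_wheel(
    workspace=ws, file_path="my_lib-0.1-py3-none-any.whl", exist_ok=True
)

# Build an environment whose pip dependencies include the wheel.
env = Environment("env-with-wheel")
env.python.conda_dependencies.add_pip_package(whl_url)

run_config = RunConfiguration()
run_config.environment = env

step = PythonScriptStep(
    name="train",
    script_name="train.py",        # a script that can now `import my_lib`
    source_directory=".",
    compute_target="cpu-cluster",  # the same compute cluster the Designer uses
    runconfig=run_config,
)

Experiment(ws, "wheel-pipeline-demo").submit(Pipeline(workspace=ws, steps=[step]))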
Custom Docker image
The "Execute Python Script" module allows use a custom python Docker image. I think this works? I just tried it but not with a custom .whl file, and it looked like it worked
Related
I have an MLflow EC2 instance running on AWS. I want to develop an MLflow plugin to save registered models to a specific AWS S3 bucket.
I have read all the documentation on plugins for MLflow and, if my understanding is correct, to develop and use a plugin I need two things:
Write the code for a package following the MLflow plugin standards, as in: https://www.mlflow.org/docs/latest/plugins.html (see the packaging sketch after this list)
Change the tracking URI by adding file-plugin: at the beginning:
MLFLOW_TRACKING_URI=file-plugin:$(PWD)/mlruns python examples/quickstart/mlflow_tracking.py
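For reference, step 1 boils down to registering your store class via a setuptools entry point, per the MLflow plugin docs. A hypothetical setup.py sketch; the package, module, and class names here are placeholders, not your actual code:

# setup.py for a hypothetical plugin package.
from setuptools import setup, find_packages

setup(
    name="mlflow-file-plugin",
    packages=find_packages(),
    install_requires=["mlflow"],
    entry_points={
        # Registers a tracking store for URIs using the "file-plugin" scheme.
        "mlflow.tracking_store":
            "file-plugin=mlflow_file_plugin.store:PluginFileStore",
    },
)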
Now, this is simple if I want the plugin to work from a Python script. I just need to install my custom plugin package in my Python environment and set the tracking URI as stated above.
However, if I want the same plugin to work when using the UI to connect to my AWS instance, I am not sure how to do it. I have found no way to set MLFLOW_TRACKING_URI to include the file-plugin: prefix.
Does anyone know how to solve this issue? How can I make sure my plugin works when interacting with MLflow through the UI?
Thanks in advance!
We are working on a project which involves integrating ML/AI into a native mobile application. We are programming our ML/AI code in Python. The Python code has dependencies that we need to include in our mobile application.
We have tried Kivy, but it only creates .apk files, and APKs can't be called from other APKs. So we need to create libraries that can be included in Android and iOS projects.
We also tried Chaquopy, but it doesn't support MediaPipe, which is at the heart of our implementation.
Any guidance in that direction will go a long way for us.
If your app were entirely self-contained in Python, including the dependencies using recipes should be possible. If rewriting the native app is not an option, one idea is to serve the ML over an HTTP API running on a local server (e.g. Flask), though this is quite cumbersome, as users would need to install two apps.
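To make the second idea concrete, a minimal sketch of such a local Flask server, where predict() stands in for your real ML code:

from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(values):
    # Placeholder for the real inference code (e.g. a mediapipe pipeline).
    return sum(values) / len(values)

@app.route("/predict", methods=["POST"])
def predict_route():
    payload = request.get_json()  # e.g. {"values": [1.0, 2.0, 3.0]}
    return jsonify({"prediction": predict(payload["values"])})

if __name__ == "__main__":
    # Bind to localhost so only apps on the same device can reach it.
    app.run(host="127.0.0.1", port=5000)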
I have created a project which has machine learning and signal processing functionality.
This project is running on a server without any issue. My Android device makes API calls to the server and gets responses.
I want this functionality to run offline (without internet), without calls to a remote API.
What are the possible ways to run the Python functionality in an Android application?
Writing the entire application in Java is not feasible because it depends on many Python libraries like numpy, scipy, pandas, sklearn, etc.
Maybe you can use Termux, which is an Android terminal emulator and Linux environment app.
It comes with a package manager pkg which can be used to install Python.
pkg install python # or python2
This installs Python along with the pip package manager.
You can also find some useful information in wiki.python.org/moin/Android.
You can try Chaquopy; it allows intermixing of Python, Java and Kotlin. Furthermore, it allows the use of cheeseshop (PyPI) packages such as the ones you described.
You should be able to integrate your existing code with a Java application for Android.
https://chaquo.com/chaquopy/
It requires a commercial license if you don't want to open-source your code.
It is possible to use python-for-android (https://github.com/kivy/python-for-android). On a rooted device, or for a system app, it is also possible to launch the Python interpreter (compiled binaries) as a separate process with the script as a parameter.
My Python App Engine Flex application needs to connect to an external Oracle database. Currently I'm using the cx_Oracle Python package which requires me to install the Oracle Instant Client.
I have successfully run this locally (on macOS) by following the Instant Client installation steps. The steps required me to do the following:
Make a directory called /opt/oracle
Create a symlink from /opt/oracle/instantclient_12_2/libclntsh.dylib.12.1 to ~/lib/
However, I am confused about how to do the same thing in App Engine Flex (instructions). Specifically, here's what I'm confused about:
The instructions say I should run sudo yum install libaio to install the libaio package. How do I do this on GAE Flex? Or is this package already available?
I think I can add the Instant Client files to GAE (a whopping ~100 MB!), then set the LD_LIBRARY_PATH environment variable in app.yaml to /opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH. Will this work?
Is this even feasible without using custom Docker containers on App Engine Flex?
Overall I'm not sure if I'm on the right track. Would love to hear from someone who has managed this before :)
If any of your dependencies is not available in the base GAE flex images provided by Google and cannot be installed via pip (because it's not a python package or it's not available in PyPI or whatever other reason) then you can't use the requirements.txt file to get it installed in your GAE flex app.
The proper way to satisfy such dependencies would be to build your own custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
Yes, that means providing a custom Dockerfile. In your particular case you'd be installing the Instant Client and libaio inside this Dockerfile. See also Building Custom Runtimes.
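As a rough sketch of what that Dockerfile could look like (the Instant Client paths come from the question; the rest is an untested assumption), with runtime: custom and env: flex set in app.yaml:

FROM gcr.io/google-appengine/python

# The GAE flex base images are Debian-based, so libaio comes from apt-get, not yum.
RUN apt-get update && apt-get install -y libaio1

# Bundle the Instant Client with the app and point the dynamic linker at it.
COPY instantclient_12_2/ /opt/oracle/instantclient_12_2/
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH

# Install the Python dependencies (cx_Oracle etc.) and start the app.
# The main:app entry point is an assumption; adjust to your module.
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app/
CMD gunicorn -b :$PORT main:app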
Answering your first question, I think the instructions on the Oracle website just show that you have to install said library for your application to work.
In the case of App Engine flex, the way to ensure that libraries are present in the deployment is the requirements.txt file. There is a documentation page which explains how to do so.
On the other hand, I will assume that the "Instant Client files" are not libraries, but data your app needs to run. You could use Google Cloud Storage to serve them, or any other storage alternative within Google Cloud.
I believe that, if this is all your app needs to work, pushing your own custom container should not be necessary.
I created a machine learning classifier with Python, using word2vec, and I want to create an API to use it in production.
What's the easiest way to do that, please?
I heard of AWS Lambda and Microsoft Azure Machine Learning Studio but I am not sure it would work with word2vec.
For example, with AWS Lambda, would I need to reload the libraries each time? (That takes a while.)
And can I install any Python package with Microsoft Azure Machine Learning Studio and choose any kind of machine (word2vec needs a lot of RAM)?
Thanks
According to the official document Execute Python machine learning scripts in Azure Machine Learning Studio, there is currently a limitation on customizing the Python installation (item No. 4), quoted below.
Inability to customize Python installation. Currently, the only way to add custom Python modules is via the zip file mechanism described earlier. While this is feasible for small modules, it is cumbersome for large modules (especially those with native DLLs) or a large number of modules.
Unfortunately, a Python package like word2vec, which includes some C modules, cannot be custom-installed on Azure ML Studio.
The only workaround is to create a VM, install word2vec for Python on it, and expose it as a Python webservice that you call over the network from the Execute Python Script module of Azure ML Studio. As the answer to Azure ML Execute Python Module: Network I/O Disabled? notes, Azure ML Studio now supports network IO for the Execute Python Script module.
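On the Studio side, the calling code is then ordinary HTTP from inside the module's azureml_main entry point. The endpoint URL, port, and JSON contract below are made up for illustration:

import json
import urllib.request  # on older Python 2 Studio bundles, use urllib2 instead

import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # Send the input text column to the word2vec service running on the VM.
    body = json.dumps({"texts": dataframe1["text"].tolist()}).encode("utf-8")
    req = urllib.request.Request(
        "http://<your-vm>:8000/embed",  # placeholder endpoint
        data=body,
        headers={"Content-Type": "application/json"},
    )
    vectors = json.loads(urllib.request.urlopen(req).read())["vectors"]
    # The module expects a tuple of DataFrames as output.
    return (pd.DataFrame(vectors),)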