I created a Machine Learning classifier with Python, using word2vec and I want to create an API to use it in production.
What's the easiest way to do that, please?
I heard of AWS Lambda and Microsoft Azure Machine Learning Studio but I am not sure it would work with word2vec.
For example, with AWS Lambda, would I need to reload the libraries on each invocation? (Loading them takes a while.)
And with Microsoft Azure Machine Learning Studio, can I install any Python package and choose any kind of machine? (word2vec needs a lot of RAM.)
Thanks
At the moment, according to the official document Execute Python machine learning scripts in Azure Machine Learning Studio, there are limitations on customizing the Python installation (item No. 4), quoted below:
Inability to customize Python installation. Currently, the only way to add custom Python modules is via the zip file mechanism described earlier. While this is feasible for small modules, it is cumbersome for large modules (especially those with native DLLs) or a large number of modules.
Unfortunately, a Python package like word2vec, which includes some C modules, cannot be custom-installed on Azure ML Studio.
The only workaround is to create a VM, install word2vec for Python on it, and expose a Python web service that the Execute Python Script module of Azure ML Studio can call over network I/O. As the answer to Azure ML Execute Python Module: Network I/O Disabled? notes, Azure ML Studio now supports network I/O for the Execute Python Script module.
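As a minimal sketch of that workaround: the Execute Python Script module posts each word to a word2vec service running on the VM and collects the returned embedding. The endpoint URL and its JSON contract (`{"word": ...}` in, `{"vector": [...]}` out) are assumptions for illustration, not a real Azure service.

```python
# Sketch: call a hypothetical word2vec web service on a VM from the
# Execute Python Script module. URL and JSON shape are assumptions.
import json
import urllib.request

SERVICE_URL = "http://10.0.0.4:8000/vector"  # hypothetical VM endpoint


def get_vector(word, url=SERVICE_URL):
    """POST a word to the VM's word2vec service and return its embedding."""
    payload = json.dumps({"word": word}).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["vector"]


def azureml_main(dataframe1=None, dataframe2=None):
    # Entry point that Azure ML Studio invokes for Execute Python Script.
    dataframe1["vector"] = dataframe1["word"].apply(get_vector)
    return dataframe1,
```

The heavy word2vec model stays loaded in the VM's web service process, so the Studio module only pays the cost of a network round trip per request.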
Related
I am working on an Azure Machine Learning Studio pipeline via the Designer. I need to install a Python library wheel (a third-party tool) in the same compute, so that I can import it into the designer. I usually install packages to compute instances via the terminal, but the Azure Machine Learning Studio designer uses a compute cluster, not a compute instance.
Is there any way to access the terminal so that I can install the wheel in the compute cluster and have access to the library via the designer? Thanks!
There isn't an easy path for this. Your options are either to switch to a code-first pipeline definition approach, or to try your darndest to extend the Designer UI to meet your needs.
Define pipelines with v2 CLI or Python SDK
I get the impression that you know Python quite well, so you should really check out the v2 CLI or the Python SDK for pipelines. I'd recommend starting with the v2 CLI, as it will be the way to define AML jobs in the future.
Both require some initial learning, but will give you all the flexibility that isn't currently available in the UI.
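As a sketch of the code-first route, a minimal v2 CLI pipeline job could install your wheel at job start and then run the training script. The file names, environment, and compute target below are assumptions, not values from your setup:

```yaml
# pipeline.yml -- submitted with: az ml job create --file pipeline.yml
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: wheel-install-pipeline
jobs:
  train:
    type: command
    code: ./src                      # contains train.py and the .whl
    command: >-
      pip install my_tool-1.0-py3-none-any.whl &&
      python train.py
    environment: azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest
    compute: azureml:cpu-cluster     # your existing compute cluster
```

Because the `pip install` runs inside the job's command, the wheel gets installed on whichever cluster node picks up the job, with no terminal access needed.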
Custom Docker image
The "Execute Python Script" module allows using a custom Python Docker image. I believe this works: I tried it (though not with a custom .whl file) and it appeared to work.
I want to create a machine learning pipeline in Python with PyCharm and run everything in an Azure Machine Learning service workspace. Then I want to integrate my PyCharm script so that when I edit and save it, a new experiment runs in the Azure ML workspace.
I have checked all the tutorials on using the Azure ML service with the Python SDK; however, every time it is done via notebooks, not PyCharm.
Azure Machine Learning service can be used from any editor that supports Python 3.5 - 3.7: PyCharm, VS Code, or just plain python.exe. We've used notebooks because they make it easy to package and present the examples, but you should be able to copy-paste the Python code and run it in any editor.
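For the "run a new experiment on save" part, one simple approach is a small watcher script run from PyCharm that re-submits whenever the training script changes on disk. The `submit_fn` below is a placeholder; with the v1 SDK it would wrap `Workspace.from_config()`, `Experiment`, and `ScriptRunConfig` (names like `"cpu-cluster"` below are assumptions):

```python
# Sketch: poll train.py's mtime and re-submit an Azure ML experiment
# each time the file changes. The actual submission is injected via
# submit_fn so the watcher itself has no Azure dependency.
import os
import time


def watch_and_submit(path, submit_fn, poll_seconds=1.0, max_polls=None):
    """Call submit_fn() every time the file at `path` is modified."""
    last = os.path.getmtime(path)
    polls = 0
    while max_polls is None or polls < max_polls:
        time.sleep(poll_seconds)
        polls += 1
        current = os.path.getmtime(path)
        if current != last:
            last = current
            submit_fn()


# A submit_fn using the v1 SDK might look like this (hypothetical names):
# from azureml.core import Workspace, Experiment, ScriptRunConfig
# def submit_fn():
#     ws = Workspace.from_config()
#     config = ScriptRunConfig(source_directory=".", script="train.py",
#                              compute_target="cpu-cluster")
#     Experiment(ws, "my-experiment").submit(config)
```

Polling mtime is crude but editor-agnostic; a file-watching library would also work if you prefer event-driven triggering.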
I am trying to update the pandas package on Microsoft Azure ML. The problem is that I am trying to execute a Python script, and the supported Python version is still 3.5. Unfortunately, I did not notice that, so my code does not execute because it is written for Python 3.7. Is there any way to update pandas in Azure ML Studio, or the whole Python runtime? I have already tried some solutions from other Stack Overflow questions, but they are very old and not working.
You can try the new "Visual Interface" in Azure ML service, which has an updated Python runtime. It is a new architecture over the UI of Azure ML Studio:
https://www.youtube.com/watch?v=QBPCaZo9xx0
I'll soon have a Linux virtual machine on Microsoft Azure. I need to run some data mining/graph analysis algorithms on Azure because I work with big data. I don't want to use the Azure Machine Learning stuff; I just want to run my own Python code. What are the steps? If needed, how can I install Python libraries on Azure?
There are no additional steps compared to your own local server. Linux on Azure is a standard Linux machine. If you are looking for a step-by-step how-to on running a Linux VM on Azure, just search on azure.com and you will find it. I think you will not have any problems even without documentation. The Azure portal is very simple to use, as is the CLI tool for Linux, Mac and Windows. You just need to run a Linux VM and SSH into it. Nothing more. If you need some help, just write here.
I have some old Fortran77 code that does some calculations. Now I am building a website hosted on Google App Engine, and I need to call those models' calculation results. Since I am new to both GAE and PiCloud, my basic questions are:
Should I first compile the Fortran77 code using a Windows compiler?
Then publish those models to PiCloud?
And call them from GAE?
Does my approach make sense? Or does PiCloud have a Fortran77 environment that can do the calculation directly without compiling first? If so, are there any examples on this topic?
Thanks!
PiCloud claims you can "install any library or binary written in any language", so it's safe to say you can run Fortran programs on it. In fact, their homepage even says:
You can deploy any software written in any programming language
including C, C++, Java, R, Fortran, Ruby, etc.
You shouldn't compile it with a Windows compiler, because PiCloud runs Linux; compile it using a Linux compiler such as GCC.
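The usual pattern is to wrap the compiled binary in a small Python function and publish that to PiCloud. The sketch below assumes the Fortran program was compiled on Linux as `./model`, takes its inputs as command-line arguments, and prints one number to stdout; the PiCloud calls in the comments use its client library, which you would run from your own machine:

```python
# Sketch: wrap a compiled Fortran binary so PiCloud can run it.
# The binary name ./model and its argv/stdout contract are assumptions.
import subprocess


def run_model(args, executable="./model"):
    """Run the compiled Fortran binary and return its stdout as a float."""
    out = subprocess.run(
        [executable, *[str(a) for a in args]],
        capture_output=True, text=True, check=True,
    )
    return float(out.stdout.strip())


# With PiCloud's client library, the wrapper can then be pushed to the cloud:
# import cloud
# jid = cloud.call(run_model, [1.0, 2.0])  # runs on PiCloud, not locally
# result = cloud.result(jid)
```

Since the wrapper is just a Python function, anything PiCloud exposes for invoking functions remotely can drive the Fortran calculation.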
Regarding using it from App Engine, see this page, where it says:
[...] use PiCloud from Google AppEngine, since our cloud client library is not supported on GAE.