Azure Function for Python is unreachable

I am getting the error below in an Azure Function for Python; please see the screenshot.
Whenever I try to open the Python function app in the portal, I get this error.
Does anyone have any idea what causes it?

This error can be very difficult to debug because it seems to have multiple possible root causes. In my case I suspect the root cause to be a failure in pip package installation, but this is difficult to verify because I was not able to drill into the pip logs: the deployment log does not contain information about pip installation, and some of the logs were unavailable because the host runtime was down.
I followed these best practices to finally make the Python function deployment succeed:
Use remote build (app setting SCM_DO_BUILD_DURING_DEPLOYMENT: 1)
Make sure the AzureWebJobsStorage application setting points to the correct storage account
Do not include the local .venv/ directory in the deployment (add it to .funcignore)
Make sure the dependencies install into a local virtual environment without conflicts
Test that the function runs locally without errors
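The .funcignore mentioned above is a plain text file of exclusion patterns in the project root; a minimal sketch (the entries beyond .venv/ are common additions, not requirements):

```
# .funcignore — excluded from the deployment package
.venv/
.vscode/
local.settings.json
tests/
```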
In requirements.txt, I had the following lines. Note that there is no need to pin the azure-functions version, since it is determined by the platform; the entry is only needed for local linting etc.
pip==21.2.*
azure-functions
As a side note, it is not necessary to specify "Build from package" (app setting: WEBSITE_RUN_FROM_PACKAGE: 1); this seems to be enabled by default.
My deployment configuration:
OS: Ubuntu 21.04
Functions Python version: 3.9
Functions Runtime Extension version: 4
Deployed with VS Code Azure extension

Related

Is there any way to run and deploy Ubuntu packages on Azure Functions startup?

In my Azure Function app, I have some Ubuntu packages, such as the Azure CLI and kubectl, that I need to install on the host whenever it starts a new container. I have already tried startup commands and also going into Bash; the former doesn't work and the latter tells me permission is denied and the resource is locked. Is there any way to install these packages on function startup in Azure Functions?
Trying to install packages via Bash will not work. When you write functions in Python and deploy them to Linux on Azure, the platform installs the packages listed in requirements.txt and merges them into a single bundle; when your function runs on Azure, it runs from that bundle. So instead of trying to install packages after deployment, specify them in requirements.txt before deploying.
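For example, when a tool happens to be distributed on PyPI it can simply be declared in requirements.txt (the entries below are illustrative: azure-cli is pip-installable, while a standalone binary like kubectl is not, so only its Python client can be listed):

```
# requirements.txt — resolved during the remote build and bundled with the app
azure-cli
kubernetes  # Python client library, a stand-in for the kubectl binary
```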

Central Directory Corrupt deploying Python Azure function

I was previously able to deploy an Azure function written in Python using the command func azure functionapp publish <FunctionAppName> from my project directory, building it remotely. It worked until lunchtime yesterday.
I now get the following message.
Creating archive for current directory...
Performing remote build for functions project.
Deleting the old .python_packages directory
Uploading [######################################################################################]
Remote build in progress, please wait...
Fetching changes.
Cleaning up temp folders from previous zip deployments and extracting pushed zip file /tmp/zipdeploy/c5e66350-4b87-4e72-9900-b2a1ae4521a8.zip (0.00 MB) to /tmp/zipdeploy/extracted
Central Directory corrupt.
Remote build failed!
I've tried the following to see if I can resolve it without any success:
Switching my machine off and on.
Deploying an older version of the code in case I've changed anything.
Deploying from the command prompt in visual studio code.
Reinstalling Azure functions core tools.
Deploying from a different machine on a different network (I read that there are sometimes firewall issues with uploading zip files but my IT manager assures me we have no restrictions and these settings have not been changed). In doing so, I had to install Azure functions core tools from scratch as it had never been installed on that machine before.
Creating a completely new clean functionapp and deploying there.
Creating a brand new minimal Python application in a clean directory and deploying this to the new functionapp.
I get the same message in each case.
I'm stuck here. Does anyone have any more information about what the error message might mean is going wrong or any ideas?
Other investigations -
I've tried deploying as a different Azure user (same error).
I've checked for any processes using port 9091 (none were found).
I have also tried to build locally using func azure functionapp publish IncidentProcessing4 --build local
I got some different error messages
Performing local build for functions project.
Directory .python_packages already in sync with requirements.txt. Skipping restoring dependencies...
Uploading package...
Uploading 0 B [###################################################################################]
Attempted to divide by zero.
Retry: 1 of 3
Uploading 0 B [###################################################################################]
Attempted to divide by zero.
Retry: 2 of 3
Uploading 0 B [###################################################################################]
Attempted to divide by zero.
Retry: 3 of 3
Uploading 0 B [###################################################################################]
Attempted to divide by zero.
I noticed that the build had created two files in my \users\name\appdata\local\temp directory, called temp374D.tmp and tmp374E.tmp. The first of these was 0 KB in size and the second 8 KB.
My suspicion is something is causing the first file to be created and something on the server is attempting and failing to unzip it.
Additional information: "Deploy to Function App" from Visual Studio Code deploys successfully, but when the Azure function runs I get errors about modules referenced by the function not being loaded. If it's possible to deploy the modules in requirements.txt with the function app, that would be a workaround.
I think I've fixed it by reverting to an earlier version of Azure Functions Core Tools. It's deploying with 2.7.1575.
I'll experiment to see if I can find out any more but I've got my function deploying now.
This may not apply, given the exhaustive set of things you tried while diagnosing the issue, but I'll throw it out there: I have run into this when my function is running, either in a terminal window (via func host start) or via the VS Code functions extension.
It would seem like it should be obvious to see if anything's running, but I've seen VSCode leave behind function host processes in a running state. To check, you could check netstat (netstat -a -n | grep 9091) or lsof (lsof -nP -iTCP:9091 | grep LISTEN). The latter gives you the pid that has the port open.
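As a small Python alternative to the netstat/lsof checks above (port 9091 is taken from the answer; the function name is mine):

```python
# Check whether a TCP port is already bound on localhost, e.g. by a
# leftover Functions host process.
import socket

def port_in_use(port, host="127.0.0.1"):
    # connect_ex returns 0 when something is listening on host:port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    print("port 9091 in use:", port_in_use(9091))
```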
It seems that a bug was introduced in Azure Functions Core Tools such that if your function directory's full path had a space character in it, e.g. /home/my functions project/, it would create a zip package with 0 bytes.
See - https://github.com/Azure/azure-functions-core-tools/issues/1867
This will be fixed in the next release. In the meanwhile, anyone experiencing this can mitigate the problem if they switch their function project to be in a path without space characters.
Sorry about that!
I am using Core Tools version 3 and the same problem exists there as well. As Ankit mentioned in his answer, the issue occurs when there's a space in the project path.
One solution is to move your project to a path without spaces until the next release fixes the issue, but this is not always feasible or desirable. Instead we can create a junction (an NTFS directory junction on Windows) to the original folder at a new path without spaces. Here's a batch snippet that uses a junction to publish functions to Azure until the next release.
:: login and create resources with cli
....
:: enter into a path of choice without spaces (eg. my Windows Temp folder path has no spaces)
pushd %PATH_WITHOUT_SPACES%
:: create a junction to project path here and navigate to it
mklink /J tmpdir %PROJECT_PATH_WITH_SPACES%
pushd tmpdir
:: execute publish as you would normally. It will succeed now!
call func azure functionapp publish %APP_NAME%
:: cleanup and return to old working dir
popd
rmdir tmpdir
popd

pyzmq on AWS Elastic Beanstalk

Has anyone managed to deploy the Python ZeroMQ bindings on a vanilla AWS Elastic Beanstalk instance? Specifically I am using 64bit Amazon Linux 2016.09 v2.2.0 running Python 3.4
In my requirements.txt I have pyzmq listed. However, when I deploy to AWS, the logs show that the deployment first attempts to link against an installed libzmq (there isn't one in the standard AMI image) and then, once that fails, tries to compile libzmq from scratch, which fails at a step invoking cc1plus, presumably because g++ is also not part of the standard AMI image.
So my question is, how do I get either libzmq or g++ to be installed on my EC2 instance on deployment?
I read somewhere you can make a .ebextensions folder inside your deployment and inside there put a "configuration file" which I attempted to do with
packages:
  yum:
    g++: []
However, this changes nothing. I am also guessing at what to name the configuration file in that folder, e.g. test.config.
Or am I going about this wrong and I need to instead fiddle with the instance installing stuff like this myself and then create a custom AMI image?
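For reference, the .ebextensions mechanism described in the question does require the file to have a .config extension, and the YAML keys must be indented. A sketch (note: on Amazon Linux the yum package that provides g++ is conventionally named gcc-c++, an assumption worth verifying against your AMI):

```
# .ebextensions/01_packages.config — any file name works, the .config suffix is required
packages:
  yum:
    gcc-c++: []
```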

Deploy Django project using wsgi and virtualenv on shared webhosting server without root access

I have a Django project which I would like to run on my shared webspace (1und1 Webspace) running on linux. I don't have root access and therefore can not edit apache's httpd.conf or install software system wide.
What I did so far:
installed sqlite locally since it is not available on the server
installed Python 3.5.1 in ~/.localpython
installed virtualenv for my local python
created a virtual environment in ~/ve_tc_lb
installed Django and Pillow in my virtual environment
cloned my django project from git server
After these steps, I'm able to run python manage.py runserver in my project directory and it seems to be running (I can access the login screen using lynx on my local machine).
I read many postings on how to configure FastCGI environments, but since I'm using Django 1.9.1 I depend on WSGI. I saw a lot about configuring Django for WSGI and virtualenv, but all the examples required access to httpd.conf.
The shared web server is apache.
I can create a new directory in my home with a sample hello.py and it is working when I enter the url, but it is (of course) using the python provided by the server and not my local installation.
When I change the first line indicating which python version to use to my virtual environment (#!/path/to/home/ve_tc_lb/bin/python), it seems to use the correct version in the virtual environment. Since I'm using different systems for developing and deployment, I'm not sure whether it is a good idea to e.g. add such a line in my djangoproject/wsgi.py.
Update 2016-06-02
A few more things I tried:
I learned that I don't have access to the apache error logs
read a lot about mod_wsgi and django in various sources which I just want to share here in case someone needs them in the future:
modwsgi - IntegrationWithDjango.wiki
debug mod_wsgi installation (only applicable if you are root)
mod_wsgi configuration guide
I followed the WSGI test script installation here, but the wsgi file is just displayed in my browser instead of being executed.
All in all it seems my provider 1und1 did not install the WSGI extensions (even though support told me a week ago they would be installed).
Update 2016-06-12: I got a reply from support (after a week or so :-S) confirming that they don't have mod_wsgi, only wsgiref...
So I'm a bit stuck here - which steps should I do next?
I'll update the question regularly based on comments and remarks. Any help is appreciated.
Since your Apache is shared, I don't expect you can change httpd.conf, so you will need a workaround. My suggestion is:
If you have multiple servers you will deploy your project (e.g. testing, staging, production), then do the following steps for each deploy target.
On each server, create a true wsgi.py file which you never put in version control, much like you would with a local_settings.py file. This file must be named wsgi.py since you most likely cannot edit the Apache settings (it is shared) and that name will be expected for your WSGI file.
The content for the file will be:
#!/path/to/your/virtualenv/python
from my_true_wsgi import *
Which will be different for each deploy server, but the difference will be, most likely, in the shebang line to locate the proper python interpreter.
You will also have a file named my_true_wsgi to match the import in the code above. That file, unlike wsgi.py, is kept in version control. Its contents are the usual contents of wsgi.py in any regular Django project; you are just not using that name directly.
With this solution you can have several different wsgi files with no conflict on shebangs.
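As a dependency-free sketch of this indirection (a plain WSGI callable stands in for Django's get_wsgi_application() so the example runs without Django; file names follow the answer):

```python
# my_true_wsgi.py — kept in version control. In a real Django project this
# would be the standard wsgi.py content; a plain WSGI callable is used here
# so the sketch is self-contained.
def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

# wsgi.py on each server — NOT in version control — then only needs:
#   #!/path/to/your/virtualenv/python
#   from my_true_wsgi import *
```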
You'll have to use a webhost that supports Django. See https://code.djangoproject.com/wiki/DjangoFriendlyWebHosts. Personally, I've used WebFaction and was quite happy with it, their support was great and customer service very responsive.

How to 'pip install packages' inside Azure WebJob to resolve package compatibility issues

I am deploying a WebJob inside Azure Web App that uses Google Maps API and Azure SQL Storage.
I am following the typical approach where I make a WebJob directory and copy my 'site-packages' folder inside the root folder of the WebJob. Then I also add my code folder inside 'site-packages' and make a run.py file inside the root that looks like this:
import sys, os
sys.path.append(os.path.join(os.getcwd(), "site-packages"))
import aero2.AzureRoutine as aero2
aero2.run()
Now the code runs correctly in Azure. But I am seeing warnings after a few commands which slow down my code.
I have tried copying 'pyopenSSL' and 'requests' module into my site-packages folder, but the error persists.
However, the code runs perfectly on my local machine.
How can I find this 'pyopenSSL' or 'requests' that is compatible with the python running on Azure?
Or
How can I modify my code so that it pip installs the relevant packages for the python running on Azure?
Or more importantly,
How can I resolve this error?
#Saad,
If your WebJob works fine on the Azure Web App but you get an InsecurePlatformWarning, I suggest you try disabling the warning via this configuration (https://urllib3.readthedocs.org/en/latest/security.html#disabling-warnings).
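The disable-warnings approach boils down to a single call (this assumes urllib3 is importable on its own; with old requests versions the bundled copy lives at requests.packages.urllib3 instead):

```python
# Silence urllib3 warnings such as InsecurePlatformWarning, per the linked docs.
import urllib3

urllib3.disable_warnings()
```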
Meanwhile, the requests lib behaves differently in higher versions; I recommend you refer to this document:
http://fossies.org/diffs/requests/2.5.3_vs_2.6.0/requests/packages/urllib3/util/ssl_.py-diff.html
Azure Web Apps uses Python 2.7.8, which is lower than 2.7.9, so you can download the requests lib at version 2.5.3.
According to the doc referred to in the warning message, https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning:
Certain Python platforms (specifically, versions of Python earlier than 2.7.9) have restrictions in their ssl module that limit the configuration that urllib3 can apply. In particular, this can cause HTTPS requests that would succeed on more featureful platforms to fail, and can cause certain security features to be unavailable.
So the easiest way to fix this warning is to upgrade the Python version of the Azure Web App. Log in to the Azure management portal and change the Python version to 3.4 in the Application settings column:
I tested a WebJob task using the requests module against an "https://" URL, and since upgrading the Python version to 3.4 there are no more warnings.
I followed this article and kind of 'pip installed' the pymongo library for my script. Not sure if it works for you but here are the steps:
Make sure you include the library name and version in the requirements.txt
Deploy the web app using Git. The directory should include at least requirements.txt; the deployment installs whatever it lists into a virtual environment shared with the Web App at D:\home\site\wwwroot\env\Lib\site-packages.
Add this block of code to the Python script you want to run in the WebJob zip file:
import sys
# use a raw string so the backslashes are not treated as escape sequences
sitepackage = r"D:\home\site\wwwroot\env\Lib\site-packages"
sys.path.append(sitepackage)
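The path-injection pattern above can be exercised locally with a throwaway module (the module name demo_mod and its contents are invented for the demo):

```python
# Build a temporary "site-packages"-style folder containing one module,
# append it to sys.path, and import from it — the same mechanism the
# WebJob snippet relies on.
import os
import sys
import tempfile

pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "demo_mod.py"), "w") as f:
    f.write("VALUE = 42\n")

sys.path.append(pkg_dir)
import demo_mod

print(demo_mod.VALUE)  # → 42
```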
