I am learning how to use Azure Functions and am trying to run my web scraping script in one.
It uses the BeautifulSoup (bs4) and pymysql modules.
It works fine when I run it locally in a virtual environment, as per this MS guide:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-first-azure-function-azure-cli?pivots=programming-language-python&tabs=cmd%2Cbrowser#run-the-function-locally
But when I create the Function App and publish the script to it, the Azure Functions logs give me this error:
Failure Exception: ModuleNotFoundError: No module named 'pymysql'.
It must happen when the module is imported.
I really don't know how to proceed. Where should I specify which modules need to be installed?
Check whether you have generated a requirements.txt that lists all of the modules your function needs. When you deploy the function to Azure, the modules in requirements.txt are installed automatically.
You can generate the module list in requirements.txt with the command below, run locally:
pip freeze > requirements.txt
Then deploy the function to Azure by running the publish command:
func azure functionapp publish hurypyfunapp --build remote
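For this scraper, the generated file should contain lines like the following (the exact version numbers are illustrative; pip freeze pins whatever is installed in your virtual environment):
beautifulsoup4==4.9.3
PyMySQL==1.0.2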
For more information about deploying a Python function from local to Azure, please refer to this tutorial.
By the way, if you use the Consumption plan for your Python function, Kudu is not available. If you want to use Kudu, you need to create an App Service plan for it rather than a Consumption plan.
Hope it helps~
You need to upload the installed modules when deploying to Azure. You can upload them using Kudu:
https://github.com/projectkudu/kudu/wiki/Kudu-console
Alternatively, you can open the Kudu console and run pip install there:
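For example, a Kudu console session might look something like this (D:\home\site\wwwroot is the usual Kudu location on a Windows App Service plan; adjust the path for your app):
cd D:\home\site\wwwroot
python -m pip install beautifulsoup4 pymysql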
Install Python packages from the Python code itself with the following snippet (tried and verified on Azure Functions):
def install(import_name, pip_name=None):
    # Install a package at runtime if importing it fails.
    # Note: the pip package name can differ from the import name,
    # e.g. pip's beautifulsoup4 provides the bs4 module.
    from importlib import import_module
    try:
        import_module(import_name)
    except ImportError:
        from sys import executable as se
        from subprocess import check_call
        check_call([se, '-m', 'pip', '-q', 'install', pip_name or import_name])

for import_name, pip_name in [('bs4', 'beautifulsoup4'), ('pymysql', None)]:
    install(import_name, pip_name)
The libraries in the list get installed when the Azure function is triggered for the first time. For subsequent triggers, you can comment out or remove the installation code.
Hello, I want to install a module named python-ldap locally in the same directory as my main file so that it can be zipped and uploaded as a standalone function. The reason is that AWS Lambda doesn't support installing this module (but I have installed it successfully on Amazon Linux). So I'm hoping I can install the module on an Amazon Linux instance and zip it so it runs on any instance, if that's possible.
For example purposes, I have a folder deploy-ldap with a single lambda_function.py inside.
The lambda_function.py simply imports the module like so:
import ldap

def main():
    print("Success")
What I have tried so far:
There are some resources suggesting to copy a single .so file, but that didn't work for me and resulted in an error where another .so.2 file was requested.
Furthermore, I tried installing the module with pip install python-ldap -t . but this also resulted in an error: "Unable to import module 'lambda_function': No module named '_ldap'"
All input appreciated, thank you. ^^
The import python-ldap is incorrect because module names in Python can't contain dashes. The correct import, as in the examples, should be:
import ldap
Then, to make it available in the environment of your AWS Lambda function, you should follow the same steps as documented in Deployment package with dependencies, which consist of the following (see the sketch after this list):
Prepare the file containing the code
Perform pip install python-ldap -t . inside that folder
Deploy the code along with the installed packages to AWS Lambda
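A minimal sketch of those steps on an Amazon Linux machine, using the deploy-ldap folder from the question (the zip file name is an assumption):
cd deploy-ldap
pip install python-ldap -t .    # installs the package next to lambda_function.py
zip -r deploy-ldap.zip .        # zip the code together with the installed module
# then upload deploy-ldap.zip as the Lambda deployment package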
Trying to create a basic Python function and use it in an Azure Function App (Consumption based). I used the HTTP template via VS Code and was able to deploy it to Azure. However, when I try to use pandas in the logic, I get an error that I am not able to rectify. Me being a rookie in Python, can you suggest how to rectify it?
Tools used: VS Code, Azure Functions Tools
Python version installed locally : 3.8.5
Azure Function App Python Version : 3.8
It seems the pandas module hasn't been installed in your function on Azure. You need to add the pandas module to your local requirements.txt and then deploy the function from local to Azure. It will install the modules according to the lines in requirements.txt.
You can run this command in the "Terminal" window to generate the pandas line in your requirements.txt automatically:
pip freeze > requirements.txt
After running the command above, your requirements.txt should look something like this:
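For example (the version numbers are illustrative and will reflect whatever is installed locally; numpy, python-dateutil, and pytz come in as pandas dependencies):
azure-functions==1.4.0
numpy==1.19.2
pandas==1.1.3
python-dateutil==2.8.1
pytz==2020.1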
I am trying to connect Firebase with an AWS Lambda. I am using their firebase-admin SDK. I have installed it and created the dependency package as described here. But I am getting this error on Lambda:
Unable to import module 'index':
Failed to import the Cloud Firestore library for Python.
Make sure to install the "google-cloud-firestore" module.
I have previously also tried setting up a similar function using Node.js, but I received an error message because gRPC was not configured. I think this error message might stem from that same problem. I don't know how to fix it. I have tried:
pip install grpcio -t path/to/...
and installing google-cloud-firestore, but neither fixed the problem. When I run the code from my terminal, I get no errors.
Part of the problem here is that grpcio compiles a platform-specific dynamic module: cygrpc.cpython-37m-darwin.so (in my case). According to this response, you cannot import dynamic modules from a zip file: https://stackoverflow.com/a/58140801
Updating to Python 3.8 fixed this for me.
As Alex DeBrie mentioned in his article on serverless.com,
The plugins section registers the plugin with the Framework. In the custom section, we tell the plugin to use Docker when installing packages with pip. It will use a Docker container that's similar to the Lambda environment so the compiled extensions will be compatible. You will need Docker installed for this to work.
Which means the environment differs between local and Lambda, so the compiled extensions differ too. If you use a container for the packages installed by pip, and the container mimics the Lambda environment, everything will run well.
If you use the Serverless Framework to deploy your Python app to AWS Lambda, add these lines to your serverless.yml file:
...
plugins:
  - serverless-python-requirements
...
custom:
  pythonRequirements:
    dockerizePip: non-linux
    dockerImage: mlupin/docker-lambda:python3.9-build
...
Then serverless-python-requirements will automatically start a Docker container based on the mlupin/docker-lambda:python3.9-build image.
This container mimics the Lambda environment and lets pip install and compile everything inside it, so the compiled extensions will be compatible.
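With this configuration in place, an ordinary deploy picks up the Docker-based build automatically (assuming Docker is running locally):
serverless deploy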
This worked in my case. Hope this helps.
I am creating a Python web app in Google App Engine.
When I
sudo pip install
a third-party library and then try to import it, I get the error 'ImportError: No module named x', where x is the name of that library. In my case, for example: Boto, Boto3, Flask, etc.
If I go into the shell in GAE and type python >> import X, the library can be used inside the Python environment. When deploying the app, though, or running the app in the virtual server in Google App Engine, I get the module import error.
I even tried methods like: python >> import sys >> sys.path.insert(0, "path_here")
and export PYTHONPATH pointed at where those libraries are located.
I even followed several Q&As here on Stack Overflow without any success. Can somebody please give me a proper way to fix the import error in Google App Engine?
FYI
I am not using any local environment on my PC; I am working directly through the GAE bash console and the code editor launched in GAE, and I am running the command dev_appserver.py $PWD
When I do
pip freeze
I can see that the modules are currently installed and deployed in the GAE virtual environment. Is there a problem with my path? What's the best approach to make GAE load my already-installed third-party libraries?
UPDATE:
Importing the library directly in the Python shell from Google App Engine works just fine. Importing the library in my app's index.py file results in the error.
[Screenshot: Python import directly from the shell]
[Screenshot: Python import in the index.py file]
Though this is an old thread, adding this answer now:
Run the command: gcloud components list
This will show which components are installed and which are not in your environment.
Install the app-engine-python components if they are not installed:
gcloud components install app-engine-python
gcloud components install app-engine-python-extras
If that doesn't work:
On Windows, uninstall and then download and install the Google Cloud SDK (check the Python version you need). Delete all the files the installer asks you to delete in the last step and run the gcloud components commands again.
I'm trying to install and run pandas on an AWS Lambda instance. I've used the recommended zip method of packaging my code file model_a.py and the related Python libraries (pip install pandas -t /path/to/dir/) and uploaded the zip to Lambda. When I try to run a test, this is the error message I get:
Unable to import module 'model_a': C extension:
/var/task/pandas/hashtable.so: undefined symbol: PyFPE_jbuf not built.
If you want to import pandas from the source directory, you may need
to run 'python setup.py build_ext --inplace' to build the C extensions
first.
It looks like an error in a variable defined in hashtable.so, which comes with the pandas installer. Googling for this did not turn up any relevant articles. There were some references to a failure in the numpy installation, but nothing concrete. I would appreciate any help in troubleshooting this! Thanks.
I would advise you to use Lambda layers for additional libraries. The size of a Lambda function package is limited, but layers can be used up to 250 MB (more here).
AWS has open-sourced a good package, including pandas, for dealing with data in Lambdas. AWS has also packaged it conveniently for Lambda layers. You can find instructions here.
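Once you have the layer's ARN from those instructions, you can attach it to your function with one CLI call (the function name and ARN below are placeholders, not real values):
aws lambda update-function-configuration --function-name my-function --layers arn:aws:lambda:<region>:<account-id>:layer:<pandas-layer-name>:<version>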
I have successfully run pandas code on Lambda before. If your development environment is not binary-compatible with the Lambda environment, you will not be able to simply run pip install pandas -t /some/dir and package it up into a Lambda .zip file. Even if you are developing on Linux, you may still run into compatibility issues.
So how do you get around this? The solution is actually pretty simple: run your pip install in a Lambda container and use the pandas module it downloads/builds instead. When I did this, I had a build script that would spin up an instance of the lambci/lambda container on my local system (a clone of the AWS Lambda container in Docker), bind my local build folder to /build, and run pip install pandas -t /build/. Once that's done, kill the container and you have the Lambda-compatible pandas module in your local build folder, ready to zip up and send to AWS along with the rest of your code.
You can do this for an arbitrary set of Python modules by using a requirements.txt file, and you can even do it for arbitrary versions of Python by first creating a virtual environment in the lambci container. I haven't needed to do this for a couple of years, so maybe there are better tools by now, but this approach should at least be functional.
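A one-line version of that build script might look like this (the image tag should match your target Lambda runtime, and the local build folder is an assumption):
docker run --rm -v "$PWD/build":/build lambci/lambda:build-python3.8 pip install pandas -t /build/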
If you want to install it directly through the AWS Console, I made a step-by-step YouTube tutorial; check out the video here: How to install Pandas on AWS Lambda