When I run a Python script that I set up on WebJobs in Azure, I get the following error:
import MySQLdb
ImportError: No module named MySQLdb
job failed due to exit code 1
I found some articles that suggest installing Python modules into a directory created on the web app. How and where would I install those modules?
Have you tried this?
http://nicholasjackson.github.io/azure/python/python-packages-and-azure-webjobs/
(from the site):
Step 1.
If you are using OS X with the default Python 2.7 install, the packages you installed with pip will be in /usr/local/lib/python2.7/site-packages. Create a folder called site-packages in the root of your Python job and copy any packages you need for your job into it.
Step 2.
Next, you need to modify run.py or any other file which requires access to the package files. At the top of the file, add:
import sys
sys.path.append("site-packages")
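Putting the two steps together, a minimal run.py for the WebJob could look like the sketch below (the site-packages folder name follows the article above; resolving it relative to the script's own directory is an assumption, added so the import works regardless of the WebJob's working directory):
import os
import sys

# Make the bundled packages importable no matter which directory the WebJob starts in
sys.path.append(os.path.join(os.path.dirname(os.path.abspath(__file__)), "site-packages"))

import MySQLdb  # now resolved from the local site-packages copy

print("MySQLdb imported successfully")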
Hello, I want to install a module named python-ldap locally in the same directory as my main script so that it can be zipped and uploaded as a standalone function. The reason is that AWS Lambda doesn't support installing this module (but I have installed it successfully on Amazon Linux). So I'm hoping I can install the module on an Amazon Linux instance and zip it so it runs on any instance, if that's possible.
For example purposes, I have a folder deploy-ldap with a single lambda_function.py inside.
The lambda_function.py simply imports the module like so:
import ldap

def main():
    print("Success")
What I tried so far:
There are some resources suggesting to copy a single .so file, but that didn't work for me and resulted in an error where another .so.2 file was being requested.
Furthermore, I tried installing the module with pip install python-ldap -t . but this also resulted in an error: "Unable to import module 'lambda_function': No module named '_ldap'"
All input appreciated, thank you. ^^
The import python-ldap is incorrect, because module names in Python cannot contain dashes. The correct import, as in the examples, should be:
import ldap
Then, to make it available in the environment of your AWS Lambda function, follow the same steps as documented in Deployment package with dependencies:
Prepare the file containing the code
Perform pip install python-ldap
Deploy the code along with the installation to AWS Lambda
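As a rough sketch of what the deployed package could contain (the handler name, the LDAP server URL, and the zip layout are illustrative assumptions, with the pip-installed python-ldap package sitting next to the handler inside the zip):
# lambda_function.py -- zipped together with the pip-installed python-ldap package
import ldap  # the python-ldap package is imported as "ldap"

def lambda_handler(event, context):
    # Creating a connection object is enough to confirm the native _ldap extension loads
    conn = ldap.initialize("ldap://ldap.example.com")  # hypothetical server
    return {"status": "Success"}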
I have a Python package (created in PyCharm) that I want to run on Azure Databricks. The Python code runs with Databricks from the command line of my laptop in both Windows and Linux environments, so I feel there are no code issues.
I've also successfully created a python wheel from the package, and am able to run the wheel from the command line locally.
Finally I've uploaded the wheel as a library to my Spark cluster, and created the Databricks Python object in Data Factory pointing to the wheel in dbfs.
When I try to run the Data Factory Pipeline, it fails with the error that it can't find the module that is the very first import statement of the main.py script. This module (GlobalVariables) is one of the other scripts in my package. It is also in the same folder as main.py; although I have other scripts in sub-folders as well. I've tried installing the package into the cluster head and still get the same error:
ModuleNotFoundError: No module named 'GlobalVariables'
Tue Apr 13 21:02:40 2021 py4j imported
Has anyone managed to run a wheel distribution as a Databricks Python object successfully, and did you have to do any trickery to have the package find the rest of the contained files/modules?
Your help greatly appreciated!
Configuration screen grabs:
We run pipelines using egg packages, but it should be similar for a wheel. Here is a summary of the steps:
1. Build the package with python setup.py bdist_egg
2. Place the egg/whl file and the main.py script into Databricks FileStore (dbfs)
3. In Azure Data Factory's Databricks Activity, go to the Settings tab
4. In Python file, set the dbfs path to the Python entrypoint file (the main.py script)
5. In the Append libraries section, select type egg/wheel and set the dbfs path to the egg/whl file
6. Select pypi and set all the dependencies of your package. It is recommended to specify the versions used in development.
Ensure the GlobalVariables module code is inside the egg. As you are working with wheels, try using them in step 5 (never tested it myself).
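For reference, a minimal setup.py that pulls sibling modules such as GlobalVariables into the built egg/wheel might look like this sketch (the project name, version, and a flat layout with GlobalVariables.py next to main.py are assumptions):
from setuptools import setup, find_packages

setup(
    name="my_databricks_job",        # placeholder project name
    version="0.1.0",
    packages=find_packages(),        # picks up any sub-packages that have __init__.py files
    py_modules=["GlobalVariables"],  # top-level module sitting next to main.py
)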
I have a Python 2.7 script that uses BeautifulSoup4 and requests modules.
The issue is that I need to deploy this script on a machine onto which we cannot directly install any new modules/libraries via pip install or anything else.
We can copy this script and any files it needs to that machine, but we cannot directly install any modules.
I have tried PyInstaller, PEX and Nuitka to create an executable file or a bundle (in any format, for example .zip) so that we can copy the entire file or bundle onto the machine and run the Python script from there, without needing pip install or a manual module installation via a wheel file. All without success.
Environment details:
Target machine on which the script needs to run: RHEL-based Linux OS with Python 2.7.
My development machine: Windows 10 but I also have access to Fedora Linux machine both with Python 3 and Python 2.7.
The import section of my script looks like this:
from __future__ import with_statement
from __future__ import absolute_import
import requests
import re
from bs4 import BeautifulSoup
from io import open
Can someone please help me out here?
We have the script ready to be deployed, but we are not able to run it on our target machine because of the missing modules/libraries.
Thank you very much
EDIT:
Mentioning this since it may not be clear at first: we do not have an issue with the network connection or anything of that kind. We are prohibited from using pip install or any manual method of installing a module. Therefore, we can only bundle the modules directly with the script, or use some other approach that does not require installing the modules on the target machine itself.
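For context, the kind of vendored layout we are aiming for looks roughly like the sketch below (the vendor folder name is illustrative; it would be filled on a compatible Python 2.7 machine with pip install --target=vendor requests beautifulsoup4 and then shipped next to the script):
from __future__ import with_statement
from __future__ import absolute_import

import os
import sys

# Put the bundled dependencies first on the path so they win over anything on the machine
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "vendor"))

import requests
import re
from bs4 import BeautifulSoup
from io import open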
I am using Python 2.7.13, and I am facing problems importing ruamel.yaml when I install it in a custom directory.
ImportError: No module named ruamel.yaml
The command used is as follows:
pip install --target=Z:\XYZ\globalpacks ruamel.yaml
I have added this custom directory to the PYTHONPATH environment variable, and I also have a .pth file in this location with the following lines:
Z:\XYZ\globalpacks\anotherApp
Z:\XYZ\globalpacks\ruamel
There is another app installed similarly with the above settings
and it works.
What am I missing here?
PS: It works when I install into the site-packages folder.
It also worked in the custom folder when I created an __init__.py file in the ruamel folder.
EDIT:
Since our content creation software uses Python 2.7, we are restricted to using the same. We have chosen to install the same version of Python on all machines and set import paths to point to modules/apps located on a shared network drive.
As mentioned, it works in Python's site-packages but not on the network drive, which is on the PYTHONPATH env variable.
The ruamel.yaml-*-nspkg.pth and ruamel.ordereddict-*-nspkg.pth files are dutifully installed. Sorry for not giving complete details earlier. Your inputs are much appreciated.
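One detail worth noting: directories listed in PYTHONPATH are added to sys.path verbatim, but their *.pth files (including the *-nspkg.pth namespace hooks) are not executed; only real site directories get that treatment. A minimal sketch of a workaround, assuming the pip install --target=Z:\XYZ\globalpacks layout above, is to register the folder with site.addsitedir before importing:
import site

# Treat the network folder like a site-packages directory so its .pth/nspkg.pth
# files are processed and the ruamel namespace package is set up correctly
site.addsitedir(r"Z:\XYZ\globalpacks")

import ruamel.yaml  # now resolved through the namespace package machinery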
I'm trying to install the vimpdb lib, but it's not working. Even though I successfully installed vimpdb using pip install, I always get this error:
import vimpdb; vimpdb.set_trace();
ImportError: No module named vimpdb
I'm running the code locally, but when I run the same code as a simple script (without using localhost) it imports correctly; it only throws an error when I start a server and begin trying to use this plugin.
Any ideas?
Thanks!
App Engine won't import Python modules on your Python path. You need to actually include the module within the App Engine project.
For example, in the same directory as app.yaml, you could add a symbolic link similar to this:
vimpdb -> /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/vimpdb
Or you could copy the vimpdb directory to that location.
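On the App Engine Python 2.7 standard runtime there is also a documented vendoring pattern that amounts to the same thing: copy third-party packages into a folder inside the project (commonly lib/) and register it in appengine_config.py. A minimal sketch, assuming that layout (the lib folder name is a convention, not a requirement):
# appengine_config.py -- lives at the project root, next to app.yaml
from google.appengine.ext import vendor

# Add any third-party packages copied into the local "lib" folder to sys.path
vendor.add('lib')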