Error deploying Python package to AWS Lambda

AWS successfully creates the Lambda function when I upload the zip file, but it gives this error when I test it:
{
  "errorMessage": "Unable to import module 'amazonSNS'"
}
These are the contents of the zip file I created. I tried renaming the zip file to "amazonSNS" to match the amazonSNS.py file, but no help; same issue.
The Lambda handler in the configuration of the Lambda function is set to "amazonSNS.handler", where amazonSNS is the filename and handler is the function to be called, as instructed in the documentation.
Here are the contents of the Python file:
import boto3
import MySQLdb

client = boto3.client("sns")
rds = boto3.client("rds")

def handler(event, context):
    conn = MySQLdb.connect("host", "username", "password", "database")
    cur = conn.cursor(MySQLdb.cursors.DictCursor)
    query = "select * from login.login limit 10"
    cur.execute(query)
    print cur.fetchall()
    print conn
What might be the issue here?
Here is the log output:
START RequestId: 76a61551-052a-11e6-b466-8fa0769ac309 Version: $LATEST
Unable to import module 'amazonSNS': No module named _mysql
END RequestId: 76a61551-052a-11e6-b466-8fa0769ac309
REPORT RequestId: 76a61551-052a-11e6-b466-8fa0769ac309 Duration: 0.33 ms Billed Duration: 100 ms
UPDATE
I added a few more files from the "site-packages" folder that I thought were part of the MySQLdb package. Here are the current contents of the zip file.
After this, the new error log is:
START RequestId: c0715d9a-0531-11e6-9409-a3b194fd4afd Version: $LATEST
Unable to import module 'amazonSNS': libmysqlclient.so.18: cannot open shared object file: No such file or directory
END RequestId: c0715d9a-0531-11e6-9409-a3b194fd4afd
REPORT RequestId: c0715d9a-0531-11e6-9409-a3b194fd4afd Duration: 0.35 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 22 MB

To solve this issue:
I searched for libmysqlclient.so.20 (the version number at the end may differ)
find /. -name "libmysqlclient.so.20"
My output was:
/./usr/lib/x86_64-linux-gnu/libmysqlclient.so.20
I then copied that file into the root directory of my package:
cp /usr/lib/x86_64-linux-gnu/libmysqlclient.so.20 <your package path>
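
To sanity-check locally that the copied library is loadable, here is a small illustrative sketch (Lambda's LD_LIBRARY_PATH includes /var/task, the package root, which is why a .so placed there can be found at import time):

import ctypes

# Illustrative check: run from the package root after copying the library in.
# The version suffix may differ on your system.
lib = ctypes.CDLL("./libmysqlclient.so.20")
print("shared library loaded:", lib)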

I failed to compile MySQL-python in a way that works in Lambda. Instead I switched to pymysql. I am not sure about performance, but at least this works.
P.S. I wonder why there are no official recommendations from Amazon about a suggested MySQL driver. At least I haven't found any.
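
For reference, a minimal pymysql version of the handler from the question; this is a sketch with placeholder credentials, not a drop-in implementation:

import pymysql

def handler(event, context):
    # pymysql is pure Python, so no compiled C extension or shared
    # library has to be bundled with the deployment package.
    conn = pymysql.connect(host="host", user="username",
                           password="password", db="database")
    try:
        with conn.cursor(pymysql.cursors.DictCursor) as cur:
            cur.execute("select * from login.login limit 10")
            return cur.fetchall()
    finally:
        conn.close()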

I had this issue when using mysqlclient (the MySQLdb fork that works on Python 3).
Since I use Zappa for easy deployment, the solution was simple: just switch to the original MySQLdb package (which does not support Python 3, though): pip install mysql-python
Zappa comes with a pre-compiled version of it.

How did you install MySQLdb? http://mysql-python.sourceforge.net/FAQ.html says:
ImportError: No module named _mysql
If you see this, it's likely you did something wrong when installing MySQLdb; re-read (or read) the README. _mysql is the low-level C module that interfaces with the MySQL client library.
Install MySQLdb with pip if you didn't already.
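
If you want to verify the install before zipping it up, here is a quick hypothetical sanity check:

# _mysql is the C extension that MySQLdb wraps, so it should import
# cleanly from the same environment you package for Lambda.
try:
    import _mysql
    print("_mysql loaded, client library:", _mysql.get_client_info())
except ImportError as exc:
    print("MySQLdb installation is broken:", exc)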


Lambda can't import the module pyminizip even though it's already in the directory

I have a similar problem to this person:
How to create password encrypted zip file in python through AWS lambda
We have the exact same problem, but I already did everything from the answers in that thread, to no avail.
I have a Lambda script that runs on Python 3.9. I need to compress the files in my S3 bucket into a password-protected zip file and put it in another S3 bucket.
This is how it goes:
import boto3
import pyminizip
from datetime import datetime

def zip_to_client():
    # subfolder, id, bucket and atz (a timezone) are defined elsewhere in the script
    # reportTitles = os.listdir(tempDir)
    dateGenerated = datetime.now(tz=atz).strftime("%Y-%m-%d")
    pyminizip.compress("Daily_Booking_Report.csv",
                       subfolder + str(dateGenerated) + '/' + str(id) + '/',
                       "/tmp/test.zip", "awesomepassword", 9)
    s3 = boto3.resource('s3')
    s3.meta.client.upload_file(Filename='/tmp/test.zip', Bucket=bucket,
                               Key=subfolder + 'test.zip',
                               ExtraArgs={'Tagging': 'archive=90days'})
    print("SUCCESS: Transferred report into S3")
I'm not sure if it works, but I can't debug it because Lambda shows me this error:
Response
{
  "errorMessage": "Unable to import module 'lambda_function': No module named 'pyminizip'",
  "errorType": "Runtime.ImportModuleError",
  "requestId": "0000111000",
  "stackTrace": []
}
I made sure that I have import pyminizip in the script, and I pip installed it into the directory:
pip install pyminizip -t .
So far, this is what the Lambda directory looks like:
https://ibb.co/ZGmLBbv
I've tried everything, from putting it in a Lambda layer to pip installing different versions for Python 3.7 through 3.9.
This is a common case when you create a Lambda layer and get an import error. It occurs when the packages are not placed in the directory structure layers require, like python/lib/python3.9/site-packages/...
or
the second reason might be a missing dependency. In that case, use Docker and follow the steps from here: https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/. A sketch of the first point follows below.
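
As that sketch, here is one way to pack locally installed packages under the python/ prefix that layers expect, using only the standard library (build_layer_zip and the paths are illustrative names, not part of any AWS tooling):

import os
import zipfile

def build_layer_zip(site_packages_dir, out_zip):
    # Prefix every entry with "python/" so Lambda puts it on sys.path.
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(site_packages_dir):
            for name in files:
                path = os.path.join(root, name)
                rel = os.path.relpath(path, site_packages_dir).replace(os.sep, "/")
                zf.write(path, "python/" + rel)

# e.g. after: pip install pyminizip -t package-dir
build_layer_zip("package-dir", "layer.zip")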

Unable to run pytest command in a lambda function on AWS

I have the following Lambda function on AWS:
import os
import sys

sys.path.insert(0, '/opt')

def tc1(event, context):
    print("in tc1")
    os.system("pytest /tests/Test_preRequisites.py -s -v")
    os.system("python -m pytest /tests/Test_preRequisites.py -s -v")
When I run this function, the following error is displayed:
Response:
null
Request ID:
"8e8738b7-9h28-4379-b814-688da8c31d58"
Function logs:
START RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58 Version: $LATEST
in tc1
sh: pytest: command not found
/var/lang/bin/python: No module named pytest
END RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58
REPORT RequestId: 8e8738b7-9h28-4379-b814-688da8c31d58 Duration: 38.46 ms Billed Duration: 100 ms Memory Size: 2048 MB Max Memory Used: 57 MB Init Duration: 123.66 ms
From these errors (sh: pytest: command not found and /var/lang/bin/python: No module named pytest) I can tell that the Lambda function is unable to find the pytest module.
I have tried to run the pytest command and also the python -m pytest command, but both give the same error.
However, I have already added a zip file as a layer and added that layer to this lambda function.
I installed pytest on my local machine to a folder by the command pip install pytest -t C:\Users\admin\dependencies
and then zipped the contents of that folder and uploaded it to the layer on AWS.
Still I am unable to access the pytest module.
This works perfectly fine in my local environment, so the script itself is fine; the issue occurs only on AWS Lambda.
Can anyone please let me know what needs to be added or modified here to get this working?
Thanks.
Place your dependencies in a 'python' directory for Python layers, like this:
pip install pytest -t C:\Users\admin\dependencies\python
then zip up the contents of the 'dependencies' folder as before. The zip file will contain a single directory, 'python', with your dependencies under it.
This is because there's no entry point in the Lambda environment. When you install pytest normally, you get a pytest script due to the project's options.entry_points value in its setup.cfg (found here: https://github.com/pytest-dev/pytest/blob/main/setup.cfg).
If you install the package into a virtualenv and navigate to the /bin directory, you'll see a pytest script sitting in there. That's what's normally being executed when you invoke the pytest command on the CLI. Your Lambda needs a version of that script, if you want to be able to shell out to it.
For reference, here's what's in that script:
#!/path/to/venv/bin/python
# -*- coding: utf-8 -*-
import re
import sys
from pytest import console_main

if __name__ == '__main__':
    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(console_main())
I have not verified it myself, but I suspect that changing the shebang to #!/usr/bin/env python in this script would make it work from within the Lambda. Also note that since your dependencies typically end up dumped into the same directory as your code in a Lambda package, you may need to use a different name for the script (because the name pytest is already taken by a directory).
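
Alternatively, if the pytest package itself is importable from your layer, you can skip the console script entirely and run pytest in-process via pytest.main — a sketch reusing the paths from the question:

import sys
sys.path.insert(0, '/opt')  # make the layer contents importable, as in the question

import pytest

def tc1(event, context):
    print("in tc1")
    # Run pytest in-process instead of shelling out; pytest.main returns an exit code.
    exit_code = pytest.main(["/tests/Test_preRequisites.py", "-s", "-v"])
    return {"pytest_exit_code": int(exit_code)}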

Cannot import sqlalchemy within AWS Lambda function

I know the concept of using a deployment package is relatively straightforward, but I've been banging my head on this issue for the last few hours. I am following the documentation from AWS on packaging up Lambda dependencies. I want to write a simple Lambda function to update an entry in a PostgreSQL table upon some event.
I first make a new directory to work in:
mkdir lambdas-deployment && cd lambdas-deployment
Then I make a new virtual environment and install my packages:
virtualenv v-env
source v-env/bin/activate
pip3 install sqlalchemy boto3 psycopg2
My trigger-yaml-parse.py function (it doesn't actually use the sqlalchemy library yet, but I'm just trying to import it successfully):
import logging
import json
import boto3
import sqlalchemy

def lambda_handler(event, context):
    records = event['Records']
    s3_records = filter(lambda record: record['eventSource'] == 'aws:s3', records)
    object_created_records = filter(lambda record: record['eventName'].startswith('ObjectCreated'), s3_records)
    for record in object_created_records:
        key = record['s3']['object']['key']
        print(key)
I've been following the instructions in the AWS documentation.
zip -r trigger-yaml-parse.zip $VIRTUAL_ENV/lib/python3.6/site-packages/
I then add in my function code:
zip -g trigger-yaml-parse.zip trigger-yaml-parse.py
I get an output of updating: trigger-yaml-parse.py (deflated 48%).
Then I upload my new zipped deployment to my S3 build bucket:
aws s3 cp trigger-yaml-parse.zip s3://lambda-build-bucket
I then choose to upload from S3 in the AWS Lambda console.
However, my Lambda function fails upon execution with the error:
START RequestId: 396c6c3c-3f5b-4df9-b7f1-057842a87eb3 Version: $LATEST
Unable to import module 'trigger-yaml-parse': No module named 'sqlalchemy'
What am I doing wrong? I've followed the documentation from AWS literally step for step.
I think your problem might be in this line:
zip -r trigger-yaml-parse.zip $VIRTUAL_ENV/lib/python3.6/site-packages/
When you create the zip file this way, the compressed files keep the complete path they had on your disk, and the Python runtime in Lambda will not be able to find the libraries.
Instead, you should do something like this:
cd $VIRTUAL_ENV/lib/python3.6/site-packages/
zip -r /full/path/to/trigger-yaml-parse.zip .
Run unzip -t against both files and you will see the difference.
from AWS documentation:
"Zip packages uploaded with incorrect permissions may cause execution
failure. AWS Lambda requires global read permissions on code files and
any dependent libraries that comprise your deployment package"
So you can use zipinfo to check permissions:
zipinfo trigger-yaml-parse.zip
-r-------- means only the file owner has permissions.
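
If you'd rather inspect the archive from Python, here is a small sketch that prints each entry's Unix permission bits alongside its path (the archive name is taken from the question):

import zipfile

with zipfile.ZipFile("trigger-yaml-parse.zip") as zf:
    for info in zf.infolist():
        mode = (info.external_attr >> 16) & 0o777  # upper bits hold the Unix mode
        # Paths should be top-level (e.g. sqlalchemy/...), not .../site-packages/...
        print(oct(mode), info.filename)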

AWS Lambda python library function error

I'm trying to connect to AWS RDS from AWS Lambda. I installed PyMySQL into the directory and built the package with the code below and the libraries.
import sys
import pymysql

def lambda_handler(event, context):
    string = ""
    try:
        connection = pymysql.connect(host='',
                                     user='',
                                     password='',
                                     db='',
                                     charset='',
                                     cursorclass=pymysql.cursors.DictCursor)
        cur = connection.cursor(pymysql.cursors.DictCursor)
        cur.execute("select * from table")
        for row in cur:
            print(row['col'])
            string += row['col']
    except Exception as e:
        print("MySQL error: %s" % (e.args[0]))
    return string

print(lambda_handler("", ""))
On my machine the code above works, but in AWS it displays:
MySQL error: module 'pymysql' has no attribute 'connect'
I checked that pymysql is available in the directory that contains the code, so I don't know why I'm not able to use the connect method.
Both Python versions are the same.
EDIT:
Traceback (most recent call last):
  File "/var/task/lambda.py", line 7, in lambda_handler
    connection = pymysql.connect(host='',
AttributeError: module 'pymysql' has no attribute 'connect'
Try zip -r package.zip *
I suspect you are zipping only the top level of the pymysql module, not the contents of its subdirectories.
The AWS documentation for uploading to lambda is pretty poor.
First, create a directory on your local machine, e.g. "package-dir".
Now install pymysql into the created directory: pip install pymysql -t path/to/package-dir
Paste your Python script into the same directory.
Select all the items inside the directory and create a zip file. Do not zip the directory itself; this is very important.
Upload the zip file to Lambda and it should work.
Also check that the handler name is "python_script_name.lambda_handler".
E.g. if your script file name is "lambda_function.py", then your handler should be "lambda_function.lambda_handler".

pyhive, sqlalchemy can not connect to hadoop sandbox

I have installed:
pip install thrift
pip install PyHive
pip install thrift-sasl
and
since pip install sasl failed, I downloaded the sasl-0.2.1-cp27-cp27m-win_amd64.whl file and installed it on my Windows 8.1 PC.
Then I wrote this code:
from pyhive import hive
cursor = hive.connect('192.168.1.232', port=10000, auth='NONE')
cursor.execute('SELECT * from sample_07 LIMIT 5',async=True)
print cursor.fetchall()
this gives the error:
Traceback (most recent call last):
  File "C:/DigInEngine/scripts/UserManagementService/fd.py", line 37, in <module>
    cursor = hive.connect('192.168.1.232', port=10000, auth = 'NONE')
  File "C:\Python27\lib\site-packages\pyhive\hive.py", line 63, in connect
    return Connection(*args, **kwargs)
  File "C:\Python27\lib\site-packages\pyhive\hive.py", line 104, in __init__
    self._transport.open()
  File "C:\Python27\lib\site-packages\thrift_sasl\__init__.py", line 72, in open
    message=("Could not start SASL: %s" % self.sasl.getError()))
thrift.transport.TTransport.TTransportException: Could not start SASL: Error in sasl_client_start (-4) SASL(-4): no mechanism available: Unable to find a callback: 2
And this code:
from sqlalchemy import create_engine

engine = create_engine('hive://192.168.1.232:10000/default')
try:
    connection = engine.connect()
except Exception, err:
    print err
result = connection.execute('select * from sample_07;')
engine.dispose()
gives this error:
Could not start SASL: Error in sasl_client_start (-4) SASL(-4): no mechanism available: Unable to find a callback: 2
I have downloaded Hortonworks sandbox from here and use it in a separate server.
NOTE: I went through this as well, but the accepted answer is not working for me, because importing ThriftHive from hive gives an ImportError although I have pip installed hive. So I decided to use pyhive or sqlalchemy.
How can I connect to hive and execute a query easily?
Here are steps to build SASL on Windows, but your mileage may vary: a lot of this depends on your particular system's paths and available libraries.
Please also note that these instructions are specific to Python 2.7 (which I see you are using from the paths in your question).
The high-level overview is that you're installing this project: https://github.com/cyrusimap/cyrus-sasl. In order to do that, you have to use the legacy C++ compiler that was used to build Python 2.7. There are a couple of other steps to getting this to work.
Pre-build steps:
1. Install Microsoft Visual C++ Compiler for Python 2.7. Use the default installation paths and take note of where it was installed for the next two steps (two options are included in the list below).
2. Copy this file to whichever of the include locations is appropriate for your install.
3. Make a unistd.h file from this answer in the same include directory.
Build steps:
4. git clone https://github.com/cyrusimap/cyrus-sasl
5. Open the "VS2013 x64 Native Tools Command Prompt" that's installed with the compiler from step 1.
6. Change directory to the directory created by step 4, then the lib sub-directory.
7. nmake /f ntmakefile STATIC=no prefix=C:\sasl64
8. nmake /f ntmakefile prefix=C:\sasl64 STATIC=no install (see note below)
9. copy /B C:\sasl64\lib\libsasl.lib /B C:\sasl64\lib\sasl2.lib
10. pip install thrift_sasl --global-option=build_ext --global-option=-IC:\\sasl64\\include --global-option=-LC:\\sasl64\\lib
'Include' locations:
"C:\Program Files (x86)\Common Files\Microsoft\Visual C++ for Python\9.0\VC\include\stdint.h"
"%USERPROFILE%\AppData\Local\Programs\Common\Microsoft\Visual C++ for Python\9.0\VC\include"
Here's a reference to these same steps, with some additional annotations and explanations: http://java2developer.blogspot.co.uk/2016/08/making-impala-connection-from-python-on.html.
Note: The referenced instructions also executed step (8) in the include and win32\include sub-directories; you may have to do that as well.
With PyHive, you can skip SASL authentication by passing auth='NOSASL' instead of 'NONE', so your code should look like this:
from pyhive import hive
cursor = hive.connect('192.168.1.232', port=10000, auth='NOSASL')
cursor.execute('SELECT * from sample_07 LIMIT 5',async=True)
print cursor.fetchall()
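
For the SQLAlchemy variant in the question, my understanding (an assumption, not verified here) is that pyhive's dialect forwards connect_args to hive.connect(), so NOSASL can be requested the same way:

from sqlalchemy import create_engine

# connect_args is assumed to be passed through to pyhive's hive.connect()
engine = create_engine('hive://192.168.1.232:10000/default',
                       connect_args={'auth': 'NOSASL'})
connection = engine.connect()
result = connection.execute('select * from sample_07 limit 5')
print(result.fetchall())
engine.dispose()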
