I have an AWS Lambda function that uses oauth2client and SignedJwtAssertionCredentials.
I have installed my requirements locally, at the root of my Lambda function directory.
requirements.txt
boto3==1.2.5
gspread==0.3.0
oauth2client==1.5.2
pyOpenSSL==0.15.1
pycrypto==2.6.1
My lambda function looks like:
import boto3
import gspread
from oauth2client.client import SignedJwtAssertionCredentials
def lambda_handler(event, context):
    dynamodb = boto3.resource('dynamodb')
    scope = ['https://spreadsheets.google.com/feeds']
    private_key = "!--some-private-key"
    google_email = "some-email"
    credentials = SignedJwtAssertionCredentials(google_email, private_key, scope)
    gc = gspread.authorize(credentials)
However, when running this, I get the following stack trace:
{
  "stackTrace": [
    [
      "/var/task/lambda_function.py",
      20,
      "lambda_handler",
      "credentials = SignedJwtAssertionCredentials(google_email, private_key, scope)"
    ],
    [
      "/var/task/oauth2client/util.py",
      140,
      "positional_wrapper",
      "return wrapped(*args, **kwargs)"
    ],
    [
      "/var/task/oauth2client/client.py",
      1630,
      "__init__",
      "_RequireCryptoOrDie()"
    ],
    [
      "/var/task/oauth2client/client.py",
      1581,
      "_RequireCryptoOrDie",
      "raise CryptoUnavailableError('No crypto library available')"
    ]
  ],
  "errorType": "CryptoUnavailableError",
  "errorMessage": "No crypto library available"
}
From everything I've read online, I am told that I need to install pyOpenSSL. However, I already have that installed, along with pycrypto.
Is there something I'm missing?
Looks like this is a bit old of a question, but if you are still looking for an answer:
This occurs because one or more of pyOpenSSL's dependencies is a native package or has native bindings (cryptography is a dependency of pyOpenSSL and in turn depends on libssl) and is not compiled for the target platform.
Unfortunately, the process for getting compiled versions varies. The simplest way (which works only if it's a difference in platforms, not missing .so libraries) is to:
Create an EC2 host (use a t2.micro and the Amazon Linux AMI)
Install python and virtualenv
Create a virtual env
Install your target library
Zip up the virtualenv's site-packages and dist-packages directories and move them off the machine
Discard the machine image
This zip will then need to be expanded into your Lambda zip before uploading. The result will be the required packages residing at the root of your zip file (not in site-packages or dist-packages folders).
For simple dependencies this works; if you require native libraries as well (such as for NumPy or SciPy), you will need to take more elaborate approaches, such as the ones outlined here: http://thankcoder.com/questions/jns3d/using-moviepy-scipy-and-numpy-in-amazon-lambda
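As a rough sketch of that layout (assuming the compiled packages were pulled down into a local build/site-packages directory and the handler file is lambda_function.py, both hypothetical names), the following writes each dependency at the root of the deployment zip instead of under a site-packages folder:

import os
import zipfile

def build_deployment_zip(site_packages_dir, handler_file, output_zip):
    # Walk the locally built site-packages and write every file relative to
    # that directory, so the packages land at the root of the archive.
    with zipfile.ZipFile(output_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(site_packages_dir):
            for name in files:
                full_path = os.path.join(root, name)
                arcname = os.path.relpath(full_path, site_packages_dir)
                zf.write(full_path, arcname)
        # The handler itself also goes at the root of the zip.
        zf.write(handler_file, os.path.basename(handler_file))

build_deployment_zip("build/site-packages", "lambda_function.py", "deployment.zip")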
So I have a similar problem to this person:
How to create password encrypted zip file in python through AWS lambda
We have the exact same problem, but I already did everything from the answers in that thread, to no avail.
I have a Lambda script that runs on Python 3.9. I need to compress the files in my S3 bucket into a password-protected zip file and put it in another S3 bucket.
This is how it goes:
import boto3
import pyminizip
from datetime import datetime

def zip_to_client():
    # reportTitles = os.listdir(tempDir)
    # atz, subfolder, id and bucket are defined elsewhere in the full script
    dateGenerated = datetime.now(tz=atz).strftime("%Y-%m-%d")
    pyminizip.compress("Daily_Booking_Report.csv", subfolder + str(dateGenerated) + '/' + str(id) + '/',
                       "/tmp/test.zip", "awesomepassword", 9)
    s3 = boto3.resource('s3')
    s3.meta.client.upload_file(Filename='/tmp/test.zip', Bucket=bucket, Key=subfolder + 'test.zip',
                               ExtraArgs={'Tagging': 'archive=90days'})
    print("SUCCESS: Transferred report into S3")
I'm not sure if it works, but I can't debug it because Lambda shows me the error:
Response
{
  "errorMessage": "Unable to import module 'lambda_function': No module named 'pyminizip'",
  "errorType": "Runtime.ImportModuleError",
  "requestId": "0000111000",
  "stackTrace": []
}
I made sure that I put import pyminizip at the top, as well as pip installing it into the directory:
pip install pyminizip -t .
So far, this is what the Lambda directory looks like:
https://ibb.co/ZGmLBbv
I've tried everything from putting it in a Lambda layer to pip installing different versions, from Python 3.7 to 3.9.
This is a common case when you create a Lambda layer and get an import error. It occurs when the packages are not placed in the expected directory structure, such as python/lib/python3.8/site-packages/...
or
The second reason might be that a dependency is missing. In that case, use Docker and follow the steps from here: https://www.geeksforgeeks.org/how-to-install-python-packages-for-aws-lambda-layers/.
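If it is unclear whether the layer is even being picked up, a minimal diagnostic handler (a sketch, not part of either fix above) can report what the runtime actually sees; Lambda extracts layer contents under /opt, and for Python runtimes /opt/python is on the import path:

import os
import sys

def lambda_handler(event, context):
    # Show the import path plus whatever the layer placed under /opt/python.
    layer_dir = "/opt/python"
    return {
        "sys_path": sys.path,
        "opt_python": os.listdir(layer_dir) if os.path.isdir(layer_dir) else "no layer contents found",
    }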
When I check the CloudWatch logs of my Lambda function, I see these errors:
[ERROR] Runtime.ImportModuleError: Unable to import module 'trigger_bitbucket_pipeline_from_s3': No module named 'requests'
File structure:
/bin
--trigger_bitbucket_pipeline_from_s3.zip
/src
--trigger_bitbucket_pipeline_from_s3.py
--/requests (lib folder)
lambda.tf
Lambda.tf:
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "${path.module}/src/trigger_bitbucket_pipeline_from_s3.py"
output_file_mode = "0666"
output_path = "${path.module}/bin/trigger_bitbucket_pipeline_from_s3.zip"
}
resource "aws_lambda_function" "processing_lambda" {
filename = data.archive_file.lambda_zip.output_path
function_name = "triggering_pipleline_lambda"
handler = "trigger_bitbucket_pipeline_from_s3.lambda_handler"
source_code_hash = data.archive_file.lambda_zip.output_base64sha256
role = aws_iam_role.processing_lambda_role.arn
runtime = "python3.9"
}
My lambda function in src/trigger_bitbucket_pipeline_from_s3.py is pretty straightforward for now:
import logging
import requests

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info(f'## EVENT: {event}')
    return {
        'statusCode': 200,
    }
What am I doing wrong? I have already double checked file names.
That is because there is no module named 'requests' in Lambda. Remember, Lambda is serverless, so you need to package all your dependencies before you run it.
One way to solve this is to install that dependency locally in your project:
pip install requests -t ./
Then create the .zip file again (with the dependency in it) and upload it to your Lambda function.
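As a quick sanity check (a sketch, assuming the rebuilt archive is still named trigger_bitbucket_pipeline_from_s3.zip), the requests package should now show up at the top level of the zip, next to the handler file:

import zipfile

with zipfile.ZipFile("trigger_bitbucket_pipeline_from_s3.zip") as zf:
    # Collect the first path component of every entry in the archive.
    top_level = {name.split("/")[0] for name in zf.namelist()}
    print(top_level)  # expect 'requests' and 'trigger_bitbucket_pipeline_from_s3.py' here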
Another way to solve it is to use a custom layer in AWS Lambda that contains the 'requests' package you require. Example:
https://dev.to/razcodes/how-to-create-a-lambda-layer-in-aws-106m
You typically receive this error when your Lambda environment can't find the specified library in the Python code. This is because Lambda isn't prepackaged with all Python libraries.
To resolve this error, create a deployment package or Lambda layer that includes the libraries that you want to use in your Python code for Lambda.
Make sure that you put the library that you import for Python inside the /python folder.
In your local environment install all library files into the python folder by running the following:
pip install librarywhatyouneed -t python/
Then zip the python folder with all of its dependencies, and attach the resulting zip as a layer to the Lambda function you created on AWS.
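For illustration, here is a minimal sketch (the python directory and layer.zip names are just examples) that zips the locally installed python folder while keeping the python/ prefix Lambda expects for layer contents:

import os
import zipfile

def build_layer_zip(python_dir="python", output_zip="layer.zip"):
    with zipfile.ZipFile(output_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(python_dir):
            for name in files:
                full_path = os.path.join(root, name)
                # Keep the leading python/ prefix so imports resolve under /opt/python.
                zf.write(full_path, full_path)

build_layer_zip()

Then upload layer.zip as a layer and attach it to the function.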
I want to install some Python packages (e.g. python-json-logger) on Dataproc Serverless. Is there a way to do an initialization action to install Python packages on Dataproc Serverless? Please let me know.
You have two options:
Using the gcloud command in the terminal:
You can create a custom image with your dependencies (Python packages) in GCR (Google Container Registry), and pass its URI as a parameter in the command below:
e.g.
$ gcloud beta dataproc batches submit \
    --container-image=gcr.io/my-project-id/my-image:1.0.1 \
    --project=my-project-id --region=us-central1 \
    --jars=file:///usr/lib/spark/external/spark-avro.jar \
    --subnet=projects/my-project-id/regions/us-central1/subnetworks/my-subnet-name
See also: creating a custom container image for Dataproc Serverless for Spark.
Using the Airflow operator DataprocCreateBatchOperator:
Add the script below to your python-file; it will install the desired package and then load this package onto the container path (Dataproc Serverless). This file must be saved in a bucket. It uses the Secret Manager package as an example.
python-file.py
import pip
import importlib
import sys
from warnings import warn
from dataclasses import dataclass

def load_package(package, path):
    warn("Update path order. Watch out for importing errors!")
    if path not in sys.path:
        sys.path.insert(0, path)
    module = importlib.import_module(package)
    return importlib.reload(module)

@dataclass
class PackageInfo:
    import_path: str
    pip_id: str

packages = [PackageInfo("google.cloud.secretmanager", "google-cloud-secret-manager==2.4.0")]
path = '/tmp/python_packages'

pip.main(['install', '-t', path, *[package.pip_id for package in packages]])

for package in packages:
    load_package(package.import_path, path=path)
...
Finally, the operator calls the python-file.py:
create_batch = DataprocCreateBatchOperator(
    task_id="batch_create",
    batch={
        "pyspark_batch": {
            "main_python_file_uri": "gs://bucket-name/python-file.py",
            "args": [
                "value1",
                "value2"
            ],
            "jar_file_uris": ["gs://bucket-name/jar-file.jar"],
        },
        "environment_config": {
            "execution_config": {
                "subnetwork_uri": "projects/my-project-id/regions/us-central1/subnetworks/my-subnet-name"
            },
        },
    },
    batch_id="batch-create",
)
I know the concept of using a deployment package is relatively straightforward, but I've been banging my head on this issue for the last few hours. I am following the documentation from AWS on packaging up Lambda dependencies. I want to write a simple Lambda function to update an entry in a PostgreSQL table upon some event.
I first make a new directory to work in:
mkdir lambdas-deployment && cd lambdas-deployment
Then I make a new virtual environment and install my packages:
virtualenv v-env
source v-env/bin/activate
pip3 install sqlalchemy boto3 psycopg2
My trigger-yaml-parse.py function (it doesn't actually use the sqlalchemy library yet, but I'm just trying to import it successfully):
import logging
import json
import boto3
import sqlalchemy

def lambda_handler(event, context):
    records = event['Records']
    s3_records = filter(lambda record: record['eventSource'] == 'aws:s3', records)
    object_created_records = filter(lambda record: record['eventName'].startswith('ObjectCreated'), s3_records)
    for record in object_created_records:
        key = record['s3']['object']['key']
        print(key)
I've been following the instructions in the AWS documentation.
zip -r trigger-yaml-parse.zip $VIRTUAL_ENV/lib/python3.6/site-packages/
I then add in my function code:
zip -g trigger-yaml-parse.zip trigger-yaml-parse.py
I get an output of updating: trigger-yaml-parse.py (deflated 48%).
Then I upload my new zipped deployment to my S3 build bucket:
aws s3 cp trigger-yaml-parse.zip s3://lambda-build-bucket
I choose upload from S3 in the AWS Lambda console:
However, my Lambda function fails upon execution with the error:
START RequestId: 396c6c3c-3f5b-4df9-b7f1-057842a87eb3 Version: $LATEST
Unable to import module 'trigger-yaml-parse': No module named 'sqlalchemy'
What am I doing wrong? I've followed the documentation from AWS literally step for step.
I think your problem might be in this line:
zip -r trigger-yaml-parse.zip $VIRTUAL_ENV/lib/python3.6/site-packages/
When you create the zip file this way, the compressed files keep the complete path they had on your disk, and the Python runtime in Lambda will not be able to find the libraries.
Instead, you should do something like this:
cd $VIRTUAL_ENV/lib/python3.6/site-packages/
zip -r /full/path/to/trigger-yaml-parse.zip .
Run unzip -t against both files and you will see the difference.
From the AWS documentation:
"Zip packages uploaded with incorrect permissions may cause execution
failure. AWS Lambda requires global read permissions on code files and
any dependent libraries that comprise your deployment package"
So you can use zipinfo to check permissions:
zipinfo trigger-yaml-parse.zip
-r-------- means only the file owner has permissions.
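If you would rather check this from Python than read zipinfo output, here is a small sketch (using the asker's trigger-yaml-parse.zip as the file name) that flags any entry that is not world-readable; entries written on some platforms store no Unix mode at all, so those are skipped:

import stat
import zipfile

with zipfile.ZipFile("trigger-yaml-parse.zip") as zf:
    for info in zf.infolist():
        mode = info.external_attr >> 16  # the upper 16 bits hold the Unix mode
        if mode and not mode & stat.S_IROTH:
            print("not world-readable:", info.filename, stat.filemode(mode))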
I am trying to upload a python lambda function with zipped dependencies but for some reason I am constantly getting
"errorMessage": "Unable to import module 'CreateThumbnail'"
whenever I test it.
Here are the steps I took which were almost identical to these docs.
Created and activate a virtualenv with virtualenv ~/lambda_env and source ~/lambda_env/bin/activate
Install Pillow and boto3 with pip install Pillow and pip install boto3
Zip dependencies with cd $VIRTUAL_ENV/lib/python2.7/site-packages and zip -r9 ~/CreateThumbnail.zip *
Add the actual python lambda function to the zip file with zip -g ~/CreateThumbnail.zip CreateThumbnail.py where CreateThumbnail.py is
from __future__ import print_function
import boto3
import os
import sys
import uuid
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), key)
        upload_path = '/tmp/resized-{}'.format(key)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}resized'.format(bucket), key)
Then in the console I set the handler to be CreateThumbnail.handler
Then I upload CreateThumbnail.zip via the aws console and click 'save & test' I get
"errorMessage": "Unable to import module 'CreateThumbnail'"
I am very confused by this because I feel like I am following the docs. Can anyone tell me what I am doing wrong here?
Perhaps check out the lambda-uploader project... It handles the packaging of dependencies and is config based.
https://github.com/rackerlabs/lambda-uploader/
Also these links may be helpful:
http://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html
https://markn.ca/2015/10/python-extension-modules-in-aws-lambda/
http://www.perrygeo.com/running-python-with-compiled-code-on-aws-lambda.html
The problem lies in the packaging hierarchy. After you install the dependencies, zip the lambda function as follows (in the example below, lambda_function is the name of my function)
Try this:
pip install requests -t .
zip -r9 lambda_function.zip .
zip -g lambda_function.zip lambda_function.py
Do not let your browser automatically unzip the lambda "project" file after downloading. This seems to corrupt the file when it is re-zipped and used.
The tutorial you pointed out uses python 3.8
And you seem to be using python 2.7
That may be the reason.
I am doing a similar tutorial, but they give us the zip ready to upload, with a warning to select Python 3.7 and not 3.8 or it will fail to run correctly.