"error message" with lambda handler in aws lambda - python

Hi, I am learning how to use AWS Lambda functions and I don't understand how to use the lambda handler. When I run the following code, I get the expected output in the function logs:
import boto3

session = boto3.Session(
    aws_access_key_id='XXXXXXXXXXXXX',
    aws_secret_access_key='XXXXXXXXXXXXXXX')
# Then use the session to get the resource
s3 = session.resource('s3')
my_bucket = s3.Bucket('XXXXX')
for my_bucket_object in my_bucket.objects.all():
    print(my_bucket_object.key)
But when I add the lambda_handler, it doesn't work:
import boto3

def lambda_handler(event, context):
    session = boto3.Session(
        aws_access_key_id='XXXXXXXXXXXXX',
        aws_secret_access_key='XXXXXXXXXXXX')
    # Then use the session to get the resource
    s3 = session.resource('s3')
    my_bucket = s3.Bucket('XXXXXX')
    for my_bucket_object in my_bucket.objects.all():
        print(my_bucket_object.key)
In this case I received the following response:
{
"errorMessage": "2022-05-10T14:50:10.023Z a840a005-9af0-4827-919a-7e2bd7eb0aae Task timed out after 3.02 seconds"
}
If anyone has knowledge of what I am doing wrong I would appreciate it.

Try this without any security settings on an S3 bucket that you make public for testing purposes. You can then run/test this from your local machine to help you with debugging before you deploy it.
import boto3

def list_file_in_bucket(event, context):
    bucket_name = event['bucket_name']
    s3 = boto3.resource('s3')
    my_bucket = s3.Bucket(bucket_name)
    for my_bucket_object in my_bucket.objects.all():
        print(my_bucket_object.key)
Tested with
def test_s3():
    test_bucket = 'public-test-bucket'
    event = {}
    event['bucket_name'] = test_bucket
    function.list_file_in_bucket(event, {})
    assert True == False
Obviously you want to change the assert.
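Also note the error in the question: "Task timed out after 3.02 seconds" means the function hit Lambda's default 3-second timeout before the listing finished. You can raise the timeout in the console under the function's configuration, or, as a rough sketch with boto3 (the function name below is a placeholder for your own):

import boto3

# Raise the timeout from the 3-second default so the S3 listing can finish.
# "my-list-bucket-function" is a placeholder function name.
lambda_client = boto3.client('lambda')
lambda_client.update_function_configuration(
    FunctionName='my-list-bucket-function',
    Timeout=30,  # seconds
)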

Related

Boto3 S3 list_objects_v2 Not Returning Any Objects

I'm using Boto3 to try to get a list of keys from an S3 bucket via an AWS Lambda Python script. No matter what I try, the bucket returns no objects.
import json, boto3, os

def getConfig():
    cfg = {
        "aws_key_id": os.getenv("AWS_KEY_ID", ""),
        "aws_secret": os.getenv("AWS_SECRET", ""),
    }
    return cfg

def lambda_handler(event, context):
    cfg = getConfig()
    bucket_name = "zachs-taxi"
    session = boto3.Session(
        aws_access_key_id=cfg.get('aws_key_id'),
        aws_secret_access_key=cfg.get('aws_secret')
    )
    s3 = session.client('s3')
I've tried both of the following but both return empty:
response = s3.list_objects_v2(
    Bucket=bucket_name)
for content in response.get('Contents', []):
    print(content['Key'])
And
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket_name):
    for content in page.get('Contents', ()):
        print(content['Key'])
The S3 bucket is public and I can access it. Inside there is a folder called content and within that folder is a .png file.
Any help would be appreciated. Thanks!
Your code ran perfectly fine for me (with a different bucket name) when I ran it on my own computer:
import boto3

bucket_name = "my-bucketname"
s3 = boto3.client('s3')
response = s3.list_objects_v2(Bucket=bucket_name)
for content in response.get('Contents', []):
    print(content['Key'])
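If the same code still returns nothing when run inside Lambda, it may help to narrow the call with a Prefix and check KeyCount, so you can tell whether the bucket is being reached but the keys simply don't match. A rough sketch, assuming the content/ folder mentioned in the question:

import boto3

s3 = boto3.client('s3')
# The Prefix assumes the "content" folder from the question; adjust as needed.
response = s3.list_objects_v2(Bucket='zachs-taxi', Prefix='content/')
print(response.get('KeyCount', 0))  # how many keys matched
for content in response.get('Contents', []):
    print(content['Key'])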

RequestTimeTooSkewed: boto3 upload sometimes fails when run in thread

I often want to upload files to Amazon S3 in the background. I'm using threads for this.
Sometimes, but not always, this results in a RequestTimeTooSkewed error.
import threading
from pathlib import Path

import boto3

def upload_file_s3(
    local_path: Path,
    s3_path: Path,
    bucket: str,
) -> None:
    ACCESS_KEY = ""
    SECRET_KEY = ""
    client = boto3.client(
        "s3",
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        region_name="eu-central-1",
        use_ssl=False,
        verify=False,
    )
    client.upload_file(str(local_path), bucket, str(s3_path))

local_path = Path('/tmp/testfile')
s3_path = Path('testfile')
bucket = 'my_bucket'
thread = threading.Thread(target=upload_file_s3, args=(local_path, s3_path, bucket))
thread.start()
botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the UploadPart operation: The difference between the request time and the current time is too large.
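RequestTimeTooSkewed generally means the request's timestamp differs from AWS's clock by more than the allowed window (about 15 minutes), so the usual fix is syncing the machine's clock (e.g. with NTP). A quick diagnostic sketch, assuming the Date header is present in the response metadata, to check how far the local clock has drifted:

import datetime

import boto3

# Compare the local UTC clock with the Date header AWS returns.
client = boto3.client('s3', region_name='eu-central-1')
response = client.list_buckets()
print('server:', response['ResponseMetadata']['HTTPHeaders']['date'])
print('local :', datetime.datetime.now(datetime.timezone.utc))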

Boto3 AWS lambda not triggering

I am using code similar to the below to trigger an AWS Lambda function on my AWS Educate account, but when I run it nothing is triggered on the Lambda (the Lambda works with the same payload through the test configuration). My session and permissions are correct, as I am able to use boto3 to access S3 resources with the same credentials. What can I try in order to fix or troubleshoot this issue?
Apologies if this is vague (I know it is), but I am very confused about why this is happening.
import boto3
import json

AWS_ACCESS_KEY_ID = "XXXXXXXXXXXXXXXXXX"
AWS_SECRET_ACCESS_KEY = "XXXXXXXXXXXXXXXXXXXXXXXXXX"
REGION = 'us-east-1'
session = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

payload = json.dumps({"function": "tweets", "amount": 10, "time": 10})

client = boto3.client('lambda',
                      region_name=REGION,
                      aws_access_key_id=AWS_ACCESS_KEY_ID,
                      aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                      aws_session_token=session)

response = client.invoke(
    FunctionName="MyFunctionARN",
    InvocationType="RequestResponse",
    Payload=payload
)
Every Lambda function has a handler function, which is the entry point for the code. By default it is lambda_handler. You can also change the default handler function under Runtime settings. The following code will solve your problem.
import boto3
import json

AWS_ACCESS_KEY_ID = "XXXXXXXXXXXXXXXXXX"
AWS_SECRET_ACCESS_KEY = "XXXXXXXXXXXXXXXXXXXXXXXXXX"
REGION = 'us-east-1'
session = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"

def lambda_handler(event, context):
    payload = json.dumps({"function": "tweets", "amount": 10, "time": 10})
    client = boto3.client('lambda',
                          region_name=REGION,
                          aws_access_key_id=AWS_ACCESS_KEY_ID,
                          aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                          aws_session_token=session)
    response = client.invoke(
        FunctionName="MyFunctionARN",
        InvocationType="RequestResponse",
        Payload=payload
    )
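To confirm the invocation succeeded, you can also read back the reply; Payload in the invoke response is a streaming body. A rough sketch of lines that could be appended inside the handler above, assuming the target function returns JSON:

    # Lambda returns the function's result as a streaming body.
    result = json.loads(response["Payload"].read())
    print("StatusCode:", response["StatusCode"])
    print("Result:", result)
    return result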

AWS Lambda returns permission denied trying to GetObject from S3 bucket

I created a Lambda function which is supposed to upload data into DynamoDB when a file is uploaded to an S3 bucket. However, I get a "GetObject operation: permission denied" error in CloudWatch when a file is uploaded to the bucket.
The Lambda function has an IAM role attached with these policies: AmazonlambdaFullAccess, AmazonS3FullAccess, AmazonCloudWatchLogsFullAccess, AmazonDynamoDBFullAccess. It has lambda.amazonaws.com as its trusted entity.
The bucket has no policies attached.
import boto3
import json
import urllib.parse

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('wireshark')
s3 = boto3.client('s3')
tests3 = boto3.resource(u's3')

def lambda_handler(event, context):
    source_bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.quote_plus(event['Records'][0]['s3']['object']['key'])
    copy_source = {'Bucket': source_bucket, 'Key': key}
    print(event)
    print("Log stream name : ", context.log_stream_name)
    print("Log group name : ", context.log_group_name)
    print("Request Id:", context.aws_request_id)
    print("Mem. limit(MB): ", context.memory_limit_in_mb)
    # just print function
    print("Log stream name : ", context.log_stream_name)
    print("Log group name : ", context.log_group_name)
    print("Request Id:", context.aws_request_id)
    print("Mem. limit(MB): ", context.memory_limit_in_mb)
    try:
        print("Using waiter to wait for object to persist through s3 service")
        waiter = s3.get_waiter('object_exists')
        waiter.wait(Bucket=source_bucket, Key=key)
        print("Accessing the received file and reading the same")
        bucket = tests3.Bucket(u'awslambdas3test2')
        obj = bucket.Object(key=key)
        response = obj.get()
        print("response from file object")
        print(response)
    except Exception as error:
        # Surface any S3 errors (such as AccessDenied) in the CloudWatch logs
        print(error)
        raise
In CloudWatch: An error occurred (AccessDenied) when calling the GetObject operation: Access Denied.
I've been through the IAM policy simulator from AWS; this IAM role should be able to GetObject from any S3 bucket.
Thank you for your help.
Code mostly from GitHub.
Here is an AWS Lambda function that will print the contents of the file:
import boto3
import os

def lambda_handler(event, context):
    s3_client = boto3.client('s3')
    # For each record
    for record in event['Records']:
        # Get Bucket and Key
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        # Print the bucket & key to the logs
        print(bucket, key)
        # Download object
        local_filename = '/tmp/' + key
        s3_client.download_file(bucket, key, local_filename)
        # Print contents to log (just to demonstrate concept)
        for line in open(local_filename):
            print(line)
        # Delete file when done, to clear space for future execution
        os.remove(local_filename)
Create an Amazon S3 event on a bucket to trigger this Lambda function and it will print the filename and the contents of the file to CloudWatch Logs. This should be a good test to determine whether the problem is with your code or with permissions.
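If you would rather wire up the S3 event from code instead of the console, a rough sketch along these lines should work; the bucket name is taken from the question and the function ARN (including the account id) is a placeholder. Note that S3 also needs permission to invoke the function:

import boto3

s3 = boto3.client('s3')
lambda_client = boto3.client('lambda')

# Placeholders -- replace with your own bucket and function ARN.
bucket = 'awslambdas3test2'
function_arn = 'arn:aws:lambda:us-east-1:123456789012:function:my-function'

# Allow S3 to invoke the Lambda function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='s3-invoke',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn='arn:aws:s3:::' + bucket,
)

# Fire the function on every object created in the bucket.
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {'LambdaFunctionArn': function_arn, 'Events': ['s3:ObjectCreated:*']}
        ]
    },
)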

How to generate url from boto3 in amazon web services

I have a bucket in S3 and I am trying to pull the URL of the image that is in there.
I am using boto3, and boto3 doesn't seem to have a built-in generate-URL method.
botocore has a lower-level method that generates a URL like this:
import botocore.session

session = botocore.session.get_session()
client = session.create_client('s3')
presigned_url = client.generate_presigned_url(
    'get_object', Params={'Bucket': self.bucket_name, 'Key': self.key})
One thing I am forced to do is send the parameters along with each request using a session object, and the above method does not allow me to set the session variables (i.e. AWS credentials).
The closest I can get is this:
from boto3 import Session

session = Session(aws_access_key_id='342342342342', aws_secret_access_key='3434234322', region_name='us-east-1')
s3 = session.resource('s3')
object = s3.Object('my-dev-bucket', 'amazonKeyString')
print(object.get()["Body"])
This gets me an Amazon S3 object whose body is a
botocore.response.StreamingBody object at 0x7ffaff8cef50
Can I get a URL of the image this way?
I was able to get results and did not face any issues in getting the signed URL.
I used the default session since my AWS credentials are stored locally in the "~/.aws/credentials" file and my default region is set as needed in "~/.aws/config".
import boto3
s3Client = boto3.client('s3')
s3Client.generate_presigned_url('get_object', Params = {'Bucket': 'www.mybucket.com', 'Key': 'hello.txt'}, ExpiresIn = 100)
If you need to pass params to the Session, import boto3.session and create a custom session:
import boto3.session
session = boto3.session.Session(region_name='eu-central-1')
s3Client = session.client('s3')
If you don't want to use the aws configure command, you can pass the credentials directly like this and generate the public URL.
import boto3
from botocore.config import Config

def generate_public_url(bucket_name, file_name, aws_region, aws_key_id, aws_secret, timeout=300):
    # if not object_exists(bucket_name, file_name):
    #     raise Exception(f"0 or many items found in bucket '{bucket_name}' with key '{file_name}')")
    s3_client = boto3.client('s3', config=Config(signature_version='s3v4'),
                             region_name=aws_region, aws_access_key_id=aws_key_id, aws_secret_access_key=aws_secret)
    url = s3_client.generate_presigned_url(
        ClientMethod='get_object',
        Params={
            'Bucket': bucket_name,
            'Key': file_name
        },
        ExpiresIn=timeout  # seconds
    )
    return url
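Called like this, it returns a presigned URL that stays valid for timeout seconds (the bucket, key, and credentials below are placeholders):

url = generate_public_url(
    bucket_name='my-dev-bucket',
    file_name='images/photo.png',
    aws_region='us-east-1',
    aws_key_id='XXXXXXXXXXXXX',
    aws_secret='XXXXXXXXXXXX',
    timeout=300,  # URL expires after 5 minutes
)
print(url)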
