I make a call to an HTTP URL and download the file into an S3 bucket; that part works. In the second part I call GuardDuty and give it the location of the S3 file to create a threat intel set. While running the code I get the error below:
Response
{
    "errorMessage": "'BadRequestException' object has no attribute 'message'",
    "errorType": "AttributeError",
    "requestId": "bec541eb-a315-4f65-9fa9-3f1139e31f86",
    "stackTrace": [
        "  File \"/var/task/lambda_function.py\", line 38, in lambda_handler\n    if \"name already exists\" in error.message:\n"
    ]
}
I want to create the threat intel set using the file in S3 (downloaded from the URL).
Code:
import boto3
import logging
from datetime import datetime
from os import environ
import requests.packages.urllib3 as urllib3

def lambda_handler(event, context):
    url = 'https://rules.emergingthreats.net/blockrules/compromised-ips.txt' # put your url here
    bucket = 'awssaflitetifeeds-security' # your s3 bucket
    key = 'GDfeeds/compromised-ips.csv' # your desired s3 path or filename
    s3 = boto3.client('s3')
    http = urllib3.PoolManager()
    s3.upload_fileobj(http.request('GET', url, preload_content=False), bucket, key)
    #------------------------------------------------------------------
    # Guard Duty
    #------------------------------------------------------------------
    try:
        location = "https://s3://awssaflitetifeeds-security/GDfeeds/compromised-ips.csv"
        timeStamp = datetime.now()
        name = "TF-%s" % timeStamp.strftime("%Y%m%d")
        guardduty = boto3.client('guardduty')
        response = guardduty.list_detectors()
        if len(response['DetectorIds']) == 0:
            raise Exception('Failed to read GuardDuty info. Please check if the service is activated')
        detectorId = response['DetectorIds'][0]
        try:
            response = guardduty.create_threat_intel_set(
                Activate=True,
                DetectorId=detectorId,
                Format='FIRE_EYE',
                Location=location,
                Name=name
            )
        except Exception as error:
            if "name already exists" in error.message:
                found = False
                response = guardduty.list_threat_intel_sets(DetectorId=detectorId)
                for setId in response['ThreatIntelSetIds']:
                    response = guardduty.get_threat_intel_set(DetectorId=detectorId, ThreatIntelSetId=setId)
                    if name == response['Name']:
                        found = True
                        response = guardduty.update_threat_intel_set(
                            Activate=True,
                            DetectorId=detectorId,
                            Location=location,
                            Name=name,
                            ThreatIntelSetId=setId
                        )
                        break
                if not found:
                    raise
        #------------------------------------------------------------------
        # Update result data
        #------------------------------------------------------------------
        result = {
            'statusCode': '200',
            'body': {'message': "You requested: %s day(s) of /view/iocs indicators in CSV" % environ['DAYS_REQUESTED']}
        }
    except Exception as error:
        logging.getLogger().error(str(error))
        responseStatus = 'FAILED'
        reason = error.message
        result = {
            'statusCode': '500',
            'body': {'message': error.message}
        }
    finally:
        #------------------------------------------------------------------
        # Send Result
        #------------------------------------------------------------------
        if 'ResponseURL' in event:
            send_response(event, context, responseStatus, responseData, event['LogicalResourceId'], reason)
The reason you are getting that error message is that the exception returned from guardduty.create_threat_intel_set does not have a message attribute directly on the exception. I think you want either error.response['Message'] or error.response['Error']['Message'] for this exception case.
A couple of other suggestions:
You should replace the except Exception that matches the already-existing-name exception with something more targeted. I'd recommend looking at which exceptions the guardduty client can throw for this particular operation and catching just the one you care about.
It is likely better to check that error.response['Error']['Code'] is exactly the error you want rather than doing a partial string match.
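Putting both suggestions together, a minimal sketch (assuming the duplicate-name case surfaces as a botocore ClientError whose Code is BadRequestException; verify against what your detector actually returns):

from botocore.exceptions import ClientError

try:
    response = guardduty.create_threat_intel_set(
        Activate=True,
        DetectorId=detectorId,
        Format='FIRE_EYE',
        Location=location,
        Name=name
    )
except ClientError as error:
    # ClientError reliably exposes these fields; a bare .message attribute does not exist
    code = error.response['Error']['Code']
    message = error.response['Error']['Message']
    if code == 'BadRequestException' and 'name already exists' in message:
        pass  # fall through to the existing list/update logic
    else:
        raise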
I want to query a CSV file in S3 using an SQL expression, comparing the CSV header CustomerId with an outside variable that is received through an API. How do I compare it in the WHERE clause? Here's my code:
import json
import urllib.parse
import io
import boto3
# import requests

s3 = boto3.client('s3')

def lambda_handler(event, context):
    CustomerId = event["queryStringParameters"]['CustomerId']
    print(CustomerId)
    try:
        resp = s3.select_object_content(
            Bucket='dhanshri-s3-bucket',
            Key='sample_data.csv',
            ExpressionType='SQL',
            Expression="SELECT s.\"List of Recommended Products\" FROM s3object s where s.\"CustomerId\" = CustomerID ",
            InputSerialization={'CSV': {"FileHeaderInfo": "Use"}, 'CompressionType': 'NONE'},
            OutputSerialization={'CSV': {}},
        )
        for event in resp['Payload']:
            if 'Records' in event:
                records = event['Records']['Payload'].decode('utf-8')
                print(records)
            elif 'Stats' in event:
                statsDetails = event['Stats']['Details']
                print("Stats details bytesScanned: ")
                print(statsDetails['BytesScanned'])
                print("Stats details bytesProcessed: ")
                print(statsDetails['BytesProcessed'])
                print("Stats details bytesReturned: ")
                print(statsDetails['BytesReturned'])
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.')
        raise e
    return {
        "statusCode": 200,
        "body": json.dumps({
            "message": "hello world",
            # "location": ip.text.replace("\n", "")
        }),
    }
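S3 Select has no bind parameters, so the variable has to be interpolated into the SQL string itself. A minimal sketch of one way to do that (my own, not from the original post; the quote-escaping is a precaution for values containing single quotes):

# Build the Expression string before calling select_object_content
safe_id = CustomerId.replace("'", "''")  # escape single quotes in the SQL literal
expression = "SELECT s.\"List of Recommended Products\" FROM s3object s WHERE s.\"CustomerId\" = '%s'" % safe_id
resp = s3.select_object_content(
    Bucket='dhanshri-s3-bucket',
    Key='sample_data.csv',
    ExpressionType='SQL',
    Expression=expression,
    InputSerialization={'CSV': {"FileHeaderInfo": "Use"}, 'CompressionType': 'NONE'},
    OutputSerialization={'CSV': {}},
)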
I am trying to follow the guide here to automate the rotation of keys for IAM users: https://awsfeed.com/whats-new/apn/automating-rotation-of-iam-user-access-and-secret-keys-with-aws-secrets-manager
Essentially I want to get new keys every 60 days, deactivate the old keys at 80 days, and then delete/remove the old keys at 90 days.
I have slightly modified it to get new keys every 60 days instead of 90, and here is the Lambda function:
import json
import boto3
import base64
import datetime
import os
from datetime import date
from botocore.exceptions import ClientError

iam = boto3.client('iam')
secretmanager = boto3.client('secretsmanager')
#IAM_UserName=os.environ['IAM_UserName']
#SecretName=os.environ['SecretName']

def create_key(uname):
    try:
        IAM_UserName = uname
        response = iam.create_access_key(UserName=IAM_UserName)
        AccessKey = response['AccessKey']['AccessKeyId']
        SecretKey = response['AccessKey']['SecretAccessKey']
        json_data = json.dumps({'AccessKey': AccessKey, 'SecretKey': SecretKey})
        secmanagerv = secretmanager.put_secret_value(SecretId=IAM_UserName, SecretString=json_data)
        emailmsg = "New "+AccessKey+" has been created. Please get the secret key value from secret manager"
        ops_sns_topic = 'arn:aws:sns:us-east-1:redacted'
        sns_send_report = boto3.client('sns', region_name='us-east-1')
        sns_send_report.publish(TopicArn=ops_sns_topic, Message=emailmsg, Subject="New Key created for user"+IAM_UserName)
    except ClientError as e:
        print(e)

def deactive_key(uname):
    try:
        #GET PREVIOUS AND CURRENT VERSION OF KEY FROM SECRET MANAGER
        IAM_UserName = uname
        getpresecvalue = secretmanager.get_secret_value(SecretId=IAM_UserName, VersionStage='AWSPREVIOUS')
        #getcursecvalue=secretmanager.get_secret_value(SecretId='secmanager3',VersionStage='AWSCURRENT')
        #print (getpresecvalue)
        #print (getcursecvalue)
        preSecString = json.loads(getpresecvalue['SecretString'])
        preAccKey = preSecString['AccessKey']
        #GET CREATION DATE OF CURRENT VERSION OF ACCESS KEY
        #curdate=getcursecvalue['CreatedDate']
        #GET TIMEZONE FROM CREATION DATE
        #tz=curdate.tzinfo
        #CALCULATE TIME DIFFERENCE BETWEEN CREATION DATE AND TODAY
        #diff=datetime.datetime.now(tz)-curdate
        #diffdays=diff.days
        #print (curdate)
        #print (tz)
        #print (diffdays)
        #print (preAccKey)
        #IF TIME DIFFERENCE IS MORE THAN x NUMBER OF DAYS THEN DEACTIVATE PREVIOUS KEY AND SEND A MESSAGE
        #if diffdays >= 1:
        iam.update_access_key(AccessKeyId=preAccKey, Status='Inactive', UserName=IAM_UserName)
        emailmsg = "PreviousKey "+preAccKey+" has been disabled for IAM User"+IAM_UserName
        ops_sns_topic = 'arn:aws:sns:us-east-1:redacted'
        sns_send_report = boto3.client('sns', region_name='us-east-1')
        sns_send_report.publish(TopicArn=ops_sns_topic, Message=emailmsg, Subject='Previous Key Deactivated')
        return
    except ClientError as e:
        print(e)
    #else:
    #    print ("Current Key is not older than 10 days")
    #print (datediff)

def delete_key(uname):
    try:
        IAM_UserName = uname
        print(IAM_UserName)
        getpresecvalue = secretmanager.get_secret_value(SecretId=IAM_UserName, VersionStage='AWSPREVIOUS')
        #getcursecvalue=secretmanager.get_secret_value(SecretId='secmanager3',VersionStage='AWSCURRENT')
        preSecString = json.loads(getpresecvalue['SecretString'])
        preAccKey = preSecString['AccessKey']
        #print (preAccKey)
        #GET CREATION DATE OF CURRENT VERSION OF ACCESS KEY
        #curdate=getcursecvalue['CreatedDate']
        #GET TIMEZONE FROM CREATION DATE
        #tz=curdate.tzinfo
        #CALCULATE TIME DIFFERENCE BETWEEN CREATION DATE AND TODAY
        #diff=datetime.datetime.now(tz)-curdate
        #diffdays=diff.days
        #IF TIME DIFFERENCE IS MORE THAN x NUMBER OF DAYS THEN DEACTIVATE PREVIOUS KEY AND SEND A MESSAGE
        #if diffdays >= 1:
        keylist = iam.list_access_keys(UserName=IAM_UserName)
        #print (keylist)
        for x in range(2):
            prevkeystatus = keylist['AccessKeyMetadata'][x]['Status']
            preacckeyvalue = keylist['AccessKeyMetadata'][x]['AccessKeyId']
            print(prevkeystatus)
            if prevkeystatus == "Inactive":
                if preAccKey == preacckeyvalue:
                    print(preacckeyvalue)
                    iam.delete_access_key(UserName=IAM_UserName, AccessKeyId=preacckeyvalue)
                    emailmsg = "PreviousKey "+preacckeyvalue+" has been deleted for user"+IAM_UserName
                    ops_sns_topic = 'arn:aws:sns:us-east-1:redacted'
                    sns_send_report = boto3.client('sns', region_name='us-east-1')
                    sns_send_report.publish(TopicArn=ops_sns_topic, Message=emailmsg, Subject='Previous Key has been deleted')
                    return
                else:
                    print("secret manager previous value doesn't match with inactive IAM key value")
            else:
                print("previous key is still active")
        return
    except ClientError as e:
        print(e)
    #else:
    #    print ("Current Key is not older than 10 days")

def lambda_handler(event, context):
    # TODO implement
    faction = event["action"]
    fuser_name = event["username"]
    if faction == "create":
        status = create_key(fuser_name)
        print(status)
    elif faction == "deactivate":
        status = deactive_key(fuser_name)
        print(status)
    elif faction == "delete":
        status = delete_key(fuser_name)
        print(status)
When testing the function I get the below error message:
Response
{
    "errorMessage": "'action'",
    "errorType": "KeyError",
    "stackTrace": [
        [
            "/var/task/lambda_function.py",
            108,
            "lambda_handler",
            "faction=event [\"action\"]"
        ]
    ]
}
Function Logs
START RequestId: 45b13b13-e992-40fe-b2e8-1f2cc53a86e5 Version: $LATEST
'action': KeyError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 108, in lambda_handler
    faction=event ["action"]
KeyError: 'action'
I have the following policies on the role and group for my user:
IAMReadOnlyAccess
AmazonSNSFullAccess
and a custom policy with the following actions:
"iam:ListUsers",
"iam:CreateAccessKey",
"iam:DeleteAccessKey",
"iam:GetAccessKeyLastUsed",
"iam:GetUser",
"iam:ListAccessKeys",
"iam:UpdateAccessKey"
My EventBridge has the constant (JSON text) as {"action":"create","username":"secmanagert3"}
I'm looking to see why I keep getting errors in the lambda handler.
Edit:
After printing out the environment variables and the event, I have these function logs:
Function Logs
START RequestId: c5cabedf-d806-4ca5-a8c6-1ded84c39a39 Version: $LATEST
## ENVIRONMENT VARIABLES
environ({'PATH': '/var/lang/bin:/usr/local/bin:/usr/bin/:/bin:/opt/bin', 'LD_LIBRARY_PATH': '/var/lang/lib:/lib64:/usr/lib64:/var/runtime:/var/runtime/lib:/var/task:/var/task/lib:/opt/lib', 'LANG': 'en_US.UTF-8', 'TZ': ':UTC', '_HANDLER': 'lambda_function.lambda_handler', 'LAMBDA_TASK_ROOT': '/var/task', 'LAMBDA_RUNTIME_DIR': '/var/runtime', 'AWS_REGION': 'us-east-2', 'AWS_DEFAULT_REGION': 'us-east-2', 'AWS_LAMBDA_LOG_GROUP_NAME': '/aws/lambda/AutomatedKeyRotation', 'AWS_LAMBDA_LOG_STREAM_NAME': '2021/10/14/[$LATEST]7f05c89773e240788lda232ec5dh8hg04', 'AWS_LAMBDA_FUNCTION_NAME': 'AutomatedKeyRotation', 'AWS_LAMBDA_FUNCTION_MEMORY_SIZE': '128', 'AWS_LAMBDA_FUNCTION_VERSION': '$LATEST', '_AWS_XRAY_DAEMON_ADDRESS': 'xxx.xxx.xx.xxx', '_AWS_XRAY_DAEMON_PORT': '2000', 'AWS_XRAY_DAEMON_ADDRESS': 'xxx.xxx.xx.xxx:2000', 'AWS_XRAY_CONTEXT_MISSING': 'LOG_ERROR', '_X_AMZN_TRACE_ID': 'Root=1-61686a72-0v9fgta25cb9ca19568ae978;Parent=523645975780233;Sampled=0', 'AWS_EXECUTION_ENV': 'AWS_Lambda_python3.6', 'AWS_LAMBDA_INITIALIZATION_TYPE': 'on-demand', 'AWS_ACCESS_KEY_ID': 'key-id-number', 'AWS_SECRET_ACCESS_KEY': 'top-secret-key', 'AWS_SESSION_TOKEN': 'very-long-token', 'PYTHONPATH': '/var/runtime'})
## EVENT
{'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
'action': KeyError
Traceback (most recent call last):
  File "/var/task/lambda_function.py", line 112, in lambda_handler
    faction=event["action"]
KeyError: 'action'
As you can see from the log file, your event doesn't have the action and username keys. That's why you're getting the KeyError.
The problem is that you are testing this by running a test from the Lambda console, not by going through the CloudWatch/EventBridge rule. To solve this:
In your Lambda function, open the "Test" tab. There you can see what your test event looks like. You can either change it manually to add the values you need to the JSON, or choose from the given templates (among others, there's a CloudWatch template). Once you add action and username to the JSON, it won't throw this error.
Alternatively, you can create the CloudWatch/EventBridge event as instructed in the post that you shared, and invoke the function through it. That way, you will see exactly what the event will look like when you actually invoke it in production.
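For example, to mirror the EventBridge constant from the question, the test event JSON would be:

{
    "action": "create",
    "username": "secmanagert3"
}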
Request Method: PATCH
There is a Query String Parameter section
Your code cannot be run to reproduce your error. Check what you get as a response here:
r = requests.patch(url, headers=self._construct_header(),data=body)
response = getattr(r,'_content').decode("utf-8")
response_json = json.loads(response)
If you pass invalid json to json.loads(), then an error occurs with a similar message.
import json
response = b'test data'.decode("utf-8")
print(response)
response_json = json.loads(response)
print(response_json)
Output:
test data
Traceback (most recent call last):
...
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
EDIT:
In your case, to avoid the error, you need to add an if-else block: after receiving the response, check what exactly you received.
r = requests.patch(url, headers=self._construct_header(), data=body)
# if necessary, check content type
print(r.headers['Content-Type'])
response = getattr(r, '_content').decode("utf-8")
if r.status_code == requests.codes.ok:
    # make sure you get the string "success"
    # if necessary, do something with the string
    return response
else:
    # if necessary, check what error you have: client or server errors, etc.
    # or throw an exception to indicate that something went wrong
    # if necessary, make sure you get the error in json format
    # you may also get an error if the json is not valid
    # since your api returns a json formatted error message:
    response_dict = json.loads(response)
    return response_dict
In these cases, your function returns either the string "success" or a dict with a description of the error.
Usage:
data = {
    'correct_prediction': 'funny',
    'is_accurate': 'False',
    'newLabel': 'funny',
}
response = aiservice.update_prediction(data)
if isinstance(response, str):
    print('New Prediction Status: ', response)
else:
    # provide error information
    # you can extract the error description from the dict
    print('Error: ', response)
This is my test so far:
def test_500(self):
    client = ClientConfiguration(token=token, url=url)
    client.url = 'https://localhost:1234/v1/' + bucket
    keys = None
    try:
        get_bucket = json.loads(str(client.get_bucket(bucket)))
        result = get_bucket['result']
    except Exception as e:
        expected_status_code = 500
        failure_message = "Expected status code %s but got status code %s" % (expected_status_code, e)
        self.assertEquals(e, expected_status_code, failure_message)
I need to write a mock that will return a 500 response when the 'https://localhost:1234/v1/' + bucket URL is used. Can this be done with unittest, and if so, how, or where can I find some documentation on this? I've been through this site, the unittest documentation, and YouTube, and can't find anything specific to what I want to do.
I ended up using the responses library to create my test.
The end result is:
import responses

@responses.activate
def test_500(self):
    responses.add(responses.GET, 'https://localhost:1234/v1/' + bucket,
                  json={'error': 'server error'}, status=500)
    client = ClientConfiguration(token=token, url=url)
    client.url = 'https://localhost:1234/v1/'
    keys = None
    try:
        get_bucket = json.loads(str(client.get_bucket(bucket)))
        result = get_bucket['result']
    except Exception as e:
        expected_status_code = 500
        failure_message = "Expected status code %s but got status code %s" % (expected_status_code, e)
        self.assertEquals(e, expected_status_code, failure_message)
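Note: responses is a third-party package (pip install responses); the @responses.activate decorator patches the requests library for the duration of the test, so any call the client makes to the registered URL receives the mocked 500 response instead of hitting the network.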
I would like to retrieve some metadata I added (using the console, x-amz-meta-my_variable) every time I upload an object to S3.
I have set up a Lambda through the console to trigger every time an object is uploaded to my bucket.
I am wondering if I can use something like variable = event['Records'][0]['s3']['object']['my_variable'] to retrieve this data, or whether I have to connect back to S3 with the bucket and key and then call some function to retrieve it?
Below is the code:
from __future__ import print_function
import json
import urllib
import boto3

print('Loading function')
s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.unquote_plus(event['Records'][0]['s3']['object']['key']).decode('utf8')
    # variable = event['Records'][0]['s3']['object']['my_variable']
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        # Call some function here?
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
The metadata is not in the event but in the head object.
The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you are interested only in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD request has the same options as a GET operation on an object. The response is identical to the GET response except that there is no response body.
s3.head_object(Bucket=bucket, Key=key)
The code below is a snippet that gets the metadata:
from __future__ import print_function
import boto3, logging

s3 = boto3.client('s3')
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        response = s3.head_object(Bucket=bucket, Key=key)
        logger.info('Response: {}'.format(response))
        print("Author : " + response['Metadata']['author'])
        print("Description : " + response['Metadata']['description'])
Output:
[INFO] 2016-05-18T01:30:47.900Z 241f0cfc-1c98-12e6-b9a7-cf406f32a0dc Response: {u'AcceptRanges': 'bytes', u'ContentType': 'binary/octet-stream', 'ResponseMetadata': {'HTTPStatusCode': 200, 'HostId': 'K8JMVbEt5xA+qXuXOedb1y5nxuv6scMXnNH/rHVtxcg=', 'RequestId': 'D05BE92E55E0'}, u'LastModified': datetime.datetime(2016, 5, 17, 22, 54, 37, tzinfo=tzutc()), u'ContentLength': 94320, u'ETag': '"0e4d457d912bce9ff81952"', u'Metadata': {'author': 'Satyajit Ray', 'description':'He was an Indian filmmaker, widely regarded as one of the greatest filmmakers of the 20th century.'}}
Author : Satyajit Ray
Description : He was an Indian filmmaker, widely regarded as one of the greatest filmmakers of the 20th century.
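As the output shows, boto3 returns user-defined metadata in response['Metadata'] with the x-amz-meta- prefix stripped and the keys lowercased, so an object uploaded with x-amz-meta-author comes back as response['Metadata']['author'].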
You can get the metadata from the head object, to which you pass an object containing the bucket and key.
E.g., below is code (in Node.js) that you can use to get the metadata that was attached to the pre-signed URL while generating it from the aws-sdk.
// for generating a pre-signed url with metadata
exports.getSignedUrl = async (myKey, metadata) => {
    const signedUrlExpireSeconds = 20000;
    const params = {
        Bucket: BUCKET,
        Key: myKey,
        Expires: signedUrlExpireSeconds,
        /* ACL: 'bucket-owner-full-control', ContentType:'image/jpeg', */
        ContentType: 'image/jpeg',
        ACL: 'public-read',
        Metadata: metadata,
    };
    const url = await s3.getSignedUrl('putObject', params);
    return url;
};

// for obtaining the metadata for the bucket and key
const s3Object = reqBody.Records[0].s3;
const bucketName = s3Object.bucket.name;
const objectKey = s3Object.object.key;
const params = {
    Bucket: bucketName,
    Key: objectKey,
};
const data = await s3.headObject(params).promise();
const metadata = (!data) ? null : data.Metadata;
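One caveat worth adding (an assumption about the upload flow, not stated above): when the client uploads through the pre-signed PUT URL, it must send the same metadata as x-amz-meta-* headers on the request, since those values were part of the signed parameters; otherwise the signature check can fail and the metadata will not be stored with the object.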