I am in the midst of coding a lambda function which will create an alarm based upon some disk metrics. The code so far looks like this:
import boto3
import collections
from datetime import datetime
import calendar

def lambda_handler(event, context):
    client = boto3.client('cloudwatch')
    alarm = client.put_metric_alarm(
        AlarmName='Disk Monitor',
        MetricName='disk_used_percent',
        Namespace='CWAgent',
        Statistic='Maximum',
        ComparisonOperator='GreaterThanOrEqualToThreshold',
        Threshold=60.0,
        Period=10,
        EvaluationPeriods=3,
        Dimensions=[
            {
                'Name': 'InstanceId',
                'Value': '{instance_id}'
            },
            {
                'Name': 'AutoScalingGroupName',
                'Value': '{instance_id}'
            },
            {
                'Name': 'fstype',
                'Value': 'xfs'
            },
            {
                'Name': 'path',
                'Value': '/'
            }
        ],
        Unit='Percent',
        ActionsEnabled=True)
As seen, {instance_id} is a variable, because the idea is that this will be used for every instance. However, I am wondering how I would do the same for AutoScalingGroupName, because I need that to be a variable too. I know that the command below pulls out the AutoScalingGroupName for me, but my problem is how to add that to the above block in terms of syntax:
aws autoscaling describe-auto-scaling-instances --output text --query "AutoScalingInstances[?InstanceId == '<instance_dets>'].{AutoScalingGroupName:AutoScalingGroupName}"
For example, would I add a block beginning as below:
def lambda_handler(event, context):
    client = boto3.client('autoscaling')
And if so, how would I then code what is needed, syntax-wise, to get 'Value': '{AutoScalingGroupName}', by which I mean a variable holding the ASG name?
describe_auto_scaling_instances takes InstanceIds as a parameter, so if you know your instance_id you can find its ASG as follows:
client = boto3.client('autoscaling')
response = client.describe_auto_scaling_instances(
    InstanceIds=[instance_id])
asg_name = ''
if response['AutoScalingInstances']:
    asg_name = response['AutoScalingInstances'][0]['AutoScalingGroupName']
print(asg_name)
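Putting the lookup and the alarm together, a minimal sketch of the whole handler could look like this. It assumes the instance id arrives in the Lambda event under an 'instance_id' key (an assumption about your trigger), and the `build_dimensions` helper is hypothetical, there only to keep the dimension-building logic free of AWS calls:

```python
def build_dimensions(instance_id, asg_name):
    # Hypothetical helper: assembles the Dimensions list for put_metric_alarm
    return [
        {'Name': 'InstanceId', 'Value': instance_id},
        {'Name': 'AutoScalingGroupName', 'Value': asg_name},
        {'Name': 'fstype', 'Value': 'xfs'},
        {'Name': 'path', 'Value': '/'},
    ]

def lambda_handler(event, context):
    import boto3  # imported here so build_dimensions stays usable without boto3 installed
    instance_id = event['instance_id']  # assumption: the trigger passes the id in the event
    asg = boto3.client('autoscaling')
    resp = asg.describe_auto_scaling_instances(InstanceIds=[instance_id])
    asg_name = ''
    if resp['AutoScalingInstances']:
        asg_name = resp['AutoScalingInstances'][0]['AutoScalingGroupName']
    cloudwatch = boto3.client('cloudwatch')
    cloudwatch.put_metric_alarm(
        AlarmName='Disk Monitor ' + instance_id,  # one alarm per instance
        MetricName='disk_used_percent',
        Namespace='CWAgent',
        Statistic='Maximum',
        ComparisonOperator='GreaterThanOrEqualToThreshold',
        Threshold=60.0,
        Period=10,
        EvaluationPeriods=3,
        Dimensions=build_dimensions(instance_id, asg_name),
        Unit='Percent',
        ActionsEnabled=True)
```

Note that an alarm only fires when a metric with exactly these dimensions exists, so the CloudWatch agent on the instance must be publishing disk_used_percent with the same dimension set.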
I'm relatively new to Python and don't know all of its mysteries yet, so I was wondering if there are any ways I can optimize this code.
I'm trying to list the name of my EC2 instances in an AWS lambda using boto3 and python.
Here's the code :
import json
import boto3
import botocore
import logging

# Create a logging message
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(name)s:%(message)s')

# Create EC2 resource
ec2 = boto3.client('ec2')
ec2_list = ec2.describe_instances()

def lambda_handler(event, context):
    try:
        for reservation in ec2_list['Reservations']:
            for instance in reservation['Instances']:
                for tag in instance['Tags']:
                    print(tag['Value'])
        return {
            'statusCode': 200,
            'body': json.dumps('Hello from Lambda!')
        }
    except botocore.exceptions.ClientError as e:
        logger.debug(e)
        raise e
I also tried the following, as seen in another post, but it didn't work, because the reservation variable is referenced before assignment, which makes sense:
for reservation, instance, tag in itertools.product(ec2_list['Reservations'], reservation['Instances'], instance['Tags']):
    print(tag['Value'])
And here is the thing I need to parse (I reduced it a lot for the post):
[
    {
        'Groups': [],
        'Instances': [
            {
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'Second Instance'
                    }
                ],
            }
        ],
    },
    {
        'Groups': [],
        'Instances': [
            {
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'First Instance'
                    }
                ],
            }
        ],
    }
]
So, right now it's working and I get the 'Value' that I want, but I would like to know if there are ways to simplify/optimize it. I'm not good at list comprehensions yet, so maybe that's the way?
Thank you!
You can do it in one line using a list comprehension, although in the end it is equivalent to the nested loops:
tags = [tag['Value'] for res in ec2_list['Reservations'] for instances in res['Instances'] for tag in instances['Tags']]
What you get is a list with all the 'Values' like this one:
print(tags)
# ['Second Instance', 'First Instance']
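If you only want the Name tag rather than every tag value, the same comprehension takes a filter clause. A small self-contained sketch, with sample data shaped like the describe_instances() output from the question:

```python
# Reduced sample mirroring the 'Reservations' structure from describe_instances()
ec2_list = {'Reservations': [
    {'Instances': [{'Tags': [{'Key': 'Name', 'Value': 'Second Instance'},
                             {'Key': 'Env', 'Value': 'dev'}]}]},
    {'Instances': [{'Tags': [{'Key': 'Name', 'Value': 'First Instance'}]}]},
]}

# .get('Tags', []) guards against untagged instances, which have no 'Tags' key
names = [tag['Value']
         for res in ec2_list['Reservations']
         for inst in res['Instances']
         for tag in inst.get('Tags', [])
         if tag['Key'] == 'Name']
print(names)  # ['Second Instance', 'First Instance']
```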
I'm executing a Dataflow pipeline from a Google Cloud Function and the workflow creation fails with the error shown in the screenshot.
I've created the LaunchTemplateParameters according to the official documentation, but some parameters are causing errors:
I would like to set the europe-west1-b zone and the n1-standard-4 machine type. What am I missing?
from googleapiclient.discovery import build

def call_dataflow(dataflow_name):
    service = build('dataflow', 'v1b3')
    gcp_project = 'PROJECT_ID'
    template_path = 'TEMPLATE_PATH'
    template_body = {
        'parameters': {
        },
        'environment': {
            'workerZone': 'europe-west1-b',
            'subnetwork': 'regions/europe-west1/subnetworks/europe-west1-subnet',
            'network': 'dataflow-network',
            'machineType': 'n1-standard-4',
            'numWorkers': 5,
            'tempLocation': 'TEMP_BUCKET_PATH',
            'ipConfiguration': 'WORKER_IP_PRIVATE'
        },
        'jobName': dataflow_name
    }
    print('series_fulfillment - call_dataflow ' + dataflow_name + ' - launching')
    request = service.projects().templates().launch(projectId=gcp_project, gcsPath=template_path, body=template_body)
    response = request.execute()
    return response
I am trying to push some custom sample metrics to Cloudwatch from a lambda function using the code below, but it times out, even with a timeout limit of 30 seconds. Just to be sure, I set full CloudWatch permissions to the lambda function, but to no avail. Any ideas what could cause this?
import boto3
import random

def lambda_handler(event, context):
    cloudwatch = boto3.client('cloudwatch')
    cloudwatch.put_metric_data(
        MetricData=[
            {
                'MetricName': 'KPIs',
                'Dimensions': [
                    {
                        'Name': 'PURCHASES_SERVICE',
                        'Value': 'CoolService'
                    },
                    {
                        'Name': 'APP_VERSION',
                        'Value': '1.0'
                    },
                ],
                'Unit': 'None',
                'Value': random.randint(1, 500)
            },
        ],
        Namespace='TestMetrics'
    )
I have written a Python script to get instance information over email with a cron setup and to populate metrics as well. With the following code I can see all the logs in the CloudWatch Logs console. However, the dimension never gets created in the CloudWatch console and no mail is triggered either.
import boto3
import json
import logging
from datetime import datetime

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def post_metric(example_namespace, example_dimension_name, example_metric_name, example_dimension_value, example_metric_value):
    cw_client = boto3.client("cloudwatch")
    response = cw_client.put_metric_data(
        Namespace=example_namespace,
        MetricData=[
            {
                'MetricName': example_metric_name,
                'Dimensions': [
                    {
                        'Name': example_dimension_name,
                        'Value': example_dimension_value
                    },
                ],
                'Timestamp': datetime.datetime.now(),
                'Value': int(example_metric_value)
            },
        ]
    )

def lambda_handler(event, context):
    logger.info(event)
    ec2_client = boto3.client("ec2")
    sns_client = boto3.client("sns")
    response = ec2_client.describe_instances(
        Filters=[
            {
                'Name': 'tag:Name',
                'Values': [
                    'jenkins-slave-*'
                ]
            }
        ]
    )['Reservations']
    for reservation in response:
        ec2_instances = reservation["Instances"]
        for instance in ec2_instances:
            myInstanceId = instance['InstanceId']
            myInstanceState = instance['State']['Name']
            myInstance = {
                'InstanceId': myInstanceId,
                'InstanceState': myInstanceState,
            }
            logger.info(json.dumps(myInstance)
            post_metric("Jenkins", "ciname", "orphaned-slaves", myInstanceId, 1)
            # Send message to SNS (Testing purpose)
            SNS_TOPIC_ARN = 'arn:aws:sns:us-east-1:1234567890:example-instance-alarms'
            sns_client.publish(
                TopicArn=SNS_TOPIC_ARN,
                Subject='Instance Info: ' + myInstanceId,
                Message='Instance id: ' + myInstanceId
            )
Can anyone please help if I am missing anything here? Thanks in advance.
There are two bugs in the code itself. Since you import with `from datetime import datetime`, the timestamp must be `datetime.now()` (or better, `datetime.utcnow()`); `datetime.datetime.now()` raises an AttributeError. Also, the line `logger.info(json.dumps(myInstance)` is missing its closing parenthesis, so the script does not run as written. Beyond that, note that put_metric_data only publishes the metric; to get a mail you still need an alarm on top of it, created with put_metric_alarm and its required fields such as AlarmName and EvaluationPeriods, plus an SNS alarm action, according to the documentation.
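Since the question imports with `from datetime import datetime`, the `datetime.datetime.now()` call in post_metric raises an AttributeError. A minimal sketch of the corrected MetricData entry (the dimension value is a placeholder, and nothing here calls AWS):

```python
from datetime import datetime

# With `from datetime import datetime`, call now()/utcnow() on the class directly
metric_entry = {
    'MetricName': 'orphaned-slaves',
    'Dimensions': [{'Name': 'ciname', 'Value': 'i-0placeholder'}],  # placeholder id
    'Timestamp': datetime.utcnow(),  # CloudWatch expects UTC timestamps
    'Value': 1
}
```

This dict can then be passed as-is inside the MetricData list of put_metric_data.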
AWS Lambda / python 2.7 / boto3
I'm trying to revoke one rule out of many in a security group (SG_we_are_working_with) but receive this error:
An error occurred (InvalidGroup.NotFound) when calling the
RevokeSecurityGroupIngress operation: The security group 'sg-xxxxx'
does not exist in default VPC 'none'
The SG really is not in the default VPC but in a custom one, yet I mention the VPC id explicitly!
SG_we_are_working_with = 'sg-xxxxx'
SG_which_is_the_source_of_the_traffic = 'sg-11111111'
VpcId = 'vpc-2222222'

# first I load the group to find the necessary rule
ec2 = boto3.resource('ec2')
security_group = ec2.SecurityGroup(SG_we_are_working_with)
security_group.load()  # get current data
# here is the loop over rules
for item in security_group.ip_permissions:
Here we take the necessary item; it looks something like:
{
    "PrefixListIds": [],
    "FromPort": 6379,
    "IpRanges": [],
    "ToPort": 11211,
    "IpProtocol": "tcp",
    "UserIdGroupPairs": [ {
        "UserId": "00111111111",
        "Description": "my descr",
        "GroupId": "sg-11111111"
    } ],
    "Ipv6Ranges": []
}
then:
# now attempt to delete; the necessary data is in the 'item' variable:
IpPermissions = [
    {
        'FromPort': item['FromPort'],
        'ToPort': item['ToPort'],
        'IpProtocol': 'tcp',
        'UserIdGroupPairs': [
            {
                'Description': item['UserIdGroupPairs'][0]["Description"],
                'GroupId': item['UserIdGroupPairs'][0]["GroupId"],
                'UserId': item['UserIdGroupPairs'][0]["UserId"],
                'VpcId': str(VpcId)
            },
        ]
    }
]
security_group.revoke_ingress(
    FromPort=item['FromPort'],
    GroupName=SG_we_are_working_with,
    IpPermissions=IpPermissions,
    IpProtocol='tcp',
    SourceSecurityGroupName=SG_which_is_the_source_of_the_traffic,
    ToPort=item['ToPort']
)
The doc I'm using is here
What am I doing wrong?
Thank you.
I have found that the easiest way to revoke permissions is to pass in the permissions already on the security group:
import boto3

# Connect to the Amazon EC2 service
ec2 = boto3.resource('ec2')

# Retrieve the security group
security_groups = ec2.security_groups.filter(GroupNames=['MY-GROUP-NAME'])

# Delete all rules in the group
for group in security_groups:
    group.revoke_ingress(IpPermissions=group.ip_permissions)
All the code above is correct except the last part; I have no idea why this is not explained in the doc.
Solution, using the code from the question:
security_group.revoke_ingress(
IpPermissions = IpPermissions,
)
So, all that stuff
FromPort = item['FromPort'],
GroupName = SG_we_are_working_with,
IpProtocol = 'tcp',
SourceSecurityGroupName = SG_which_is_the_source_of_the_traffic,
ToPort = item['ToPort']
was excessive and caused the error: passing GroupName (or SourceSecurityGroupName) makes EC2 look the group up by name, which only works in the default VPC, hence the "does not exist in default VPC 'none'" message.
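A minimal self-contained sketch of the matching step, using a copy of the rule structure from the question. The `pick_rule` helper name is made up for illustration; only the dict filtering runs here, while the revoke call itself would still go through boto3:

```python
def pick_rule(ip_permissions, from_port):
    # Hypothetical helper: find the first rule matching a FromPort and
    # keep only the keys that revoke_ingress needs to identify it
    for item in ip_permissions:
        if item.get('FromPort') == from_port:
            return [{
                'FromPort': item['FromPort'],
                'ToPort': item['ToPort'],
                'IpProtocol': item['IpProtocol'],
                'UserIdGroupPairs': item['UserIdGroupPairs'],
            }]
    return []

# Sample data shaped like security_group.ip_permissions in the question
rules = [{
    'PrefixListIds': [], 'FromPort': 6379, 'IpRanges': [], 'ToPort': 11211,
    'IpProtocol': 'tcp', 'Ipv6Ranges': [],
    'UserIdGroupPairs': [{'UserId': '00111111111', 'Description': 'my descr',
                          'GroupId': 'sg-11111111'}],
}]

perms = pick_rule(rules, 6379)
# security_group.revoke_ingress(IpPermissions=perms)  # the only argument needed
```

Because the group is addressed by its resource (and thus by GroupId), no name-based parameters are involved and the default-VPC lookup never happens.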