New to AWS CDK and I'm trying to create a load balanced fargate service with the construct ApplicationLoadBalancedFargateService.
I have an existing image on ECR that I would like to reference and use. I've found the ecs.ContainerImage.from_ecr_repository function, which I believe is what I should use in this case. However, this function takes an IRepository as a parameter and I cannot find anything under aws_ecr.IRepository or aws_ecr.Repository to reference a pre-existing image. These constructs all seem to be for making a new repository.
Anyone know what I should be using to get the IRepository object for an existing repo? Is this just not typically done this way?
Code is below. Thanks in Advance.
from aws_cdk import (
    # Duration,
    Stack,
    # aws_sqs as sqs,
)
from constructs import Construct
from aws_cdk import (aws_ec2 as ec2, aws_ecs as ecs,
                     aws_ecs_patterns as ecs_patterns,
                     aws_route53, aws_certificatemanager,
                     aws_ecr)


class NewStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        _repo = aws_ecr.Repository(self, 'id1', repository_uri=repo_uri)

        vpc = ec2.Vpc(self, "applications", max_azs=3)  # default is all AZs in region
        cluster = ecs.Cluster(self, "id2", vpc=vpc)

        hosted_zone = aws_route53.HostedZone.from_lookup(
            self, 'id3',
            domain_name='domain'
        )

        certificate = aws_certificatemanager.Certificate.from_certificate_arn(
            self, 'id4',
            'cert_arn'
        )

        image = ecs.ContainerImage.from_ecr_repository(self, _repo)

        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "id5",
            cluster=cluster,              # Required
            cpu=512,                      # Default is 256
            desired_count=2,              # Default is 1
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=image,
                container_port=8000),
            memory_limit_mib=2048,        # Default is 512
            public_load_balancer=True,
            domain_name='domain_name',
            domain_zone=hosted_zone,
            certificate=certificate,
            redirect_http=True)
You are looking for from_repository_attributes() to create an instance of IRepository from an existing ECR repository.
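For example, something along these lines should work (a minimal sketch; the repository name, ARN, and image tag are placeholders for your existing repo):

_repo = aws_ecr.Repository.from_repository_attributes(
    self, 'id1',
    repository_arn=f'arn:aws:ecr:{self.region}:{self.account}:repository/my-existing-repo',
    repository_name='my-existing-repo'
)
# Note: from_ecr_repository takes the repository (and optionally a tag), not the stack.
image = ecs.ContainerImage.from_ecr_repository(_repo, tag='latest')

If all you have is the repository name, aws_ecr.Repository.from_repository_name(self, 'id1', 'my-existing-repo') is a shorter alternative.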
Related
I'm working on a project to automate the deployment of VMs in GCP using Python. I recently figured out how to create a custom image using Python and I'm now at the step where I need to create the VM. I have the example template from the Google documentation but I'm stuck on a particular method and don't understand the argument that it wants.
I can successfully get the image from the family and create an attached disk from the image, but when I get to the create_instance function I'm unsure how it wants me to reference the disk: disks: List[compute_v1.AttachedDisk]. I keep getting google.cloud.compute.v1.Instance.disks is not iterable when I try to specify the name or path.
Any help or guidance is appreciated.
import re
import sys
from typing import Any, List
import warnings

from google.api_core.extended_operation import ExtendedOperation
from google.cloud import compute_v1


def get_image_from_family(project: str, family: str) -> compute_v1.Image:
    """
    Retrieve the newest image that is part of a given family in a project.

    Args:
        project: project ID or project number of the Cloud project you want to get image from.
        family: name of the image family you want to get image from.

    Returns:
        An Image object.
    """
    image_client = compute_v1.ImagesClient()
    # List of public operating system (OS) images: https://cloud.google.com/compute/docs/images/os-details
    newest_image = image_client.get_from_family(project=project, family=family)
    return newest_image
def disk_from_image(
    disk_type: str,
    disk_size_gb: int,
    boot: bool,
    source_image: str,
    auto_delete: bool = True,
) -> compute_v1.AttachedDisk:
    """
    Create an AttachedDisk object to be used in VM instance creation. Uses an image as the
    source for the new disk.

    Args:
        disk_type: the type of disk you want to create. This value uses the following format:
            "zones/{zone}/diskTypes/(pd-standard|pd-ssd|pd-balanced|pd-extreme)".
            For example: "zones/us-west3-b/diskTypes/pd-ssd"
        disk_size_gb: size of the new disk in gigabytes
        boot: boolean flag indicating whether this disk should be used as a boot disk of an instance
        source_image: source image to use when creating this disk. You must have read access to this disk. This can be one
            of the publicly available images or an image from one of your projects.
            This value uses the following format: "projects/{project_name}/global/images/{image_name}"
        auto_delete: boolean flag indicating whether this disk should be deleted with the VM that uses it

    Returns:
        AttachedDisk object configured to be created using the specified image.
    """
    boot_disk = compute_v1.AttachedDisk()
    initialize_params = compute_v1.AttachedDiskInitializeParams()
    initialize_params.source_image = source_image
    initialize_params.disk_size_gb = disk_size_gb
    initialize_params.disk_type = disk_type
    boot_disk.initialize_params = initialize_params
    # Remember to set auto_delete to True if you want the disk to be deleted when you delete
    # your VM instance.
    boot_disk.auto_delete = auto_delete
    boot_disk.boot = boot
    return boot_disk
def wait_for_extended_operation(
    operation: ExtendedOperation, verbose_name: str = "operation", timeout: int = 300
) -> Any:
    """
    This method will wait for the extended (long-running) operation to
    complete. If the operation is successful, it will return its result.
    If the operation ends with an error, an exception will be raised.
    If there were any warnings during the execution of the operation
    they will be printed to sys.stderr.

    Args:
        operation: a long-running operation you want to wait on.
        verbose_name: (optional) a more verbose name of the operation,
            used only during error and warning reporting.
        timeout: how long (in seconds) to wait for operation to finish.
            If None, wait indefinitely.

    Returns:
        Whatever the operation.result() returns.

    Raises:
        This method will raise the exception received from `operation.exception()`
        or RuntimeError if there is no exception set, but there is an `error_code`
        set for the `operation`.

        In case of an operation taking longer than `timeout` seconds to complete,
        a `concurrent.futures.TimeoutError` will be raised.
    """
    result = operation.result(timeout=timeout)

    if operation.error_code:
        print(
            f"Error during {verbose_name}: [Code: {operation.error_code}]: {operation.error_message}",
            file=sys.stderr,
            flush=True,
        )
        print(f"Operation ID: {operation.name}", file=sys.stderr, flush=True)
        raise operation.exception() or RuntimeError(operation.error_message)

    if operation.warnings:
        print(f"Warnings during {verbose_name}:\n", file=sys.stderr, flush=True)
        for warning in operation.warnings:
            print(f" - {warning.code}: {warning.message}", file=sys.stderr, flush=True)

    return result
def create_instance(
    project_id: str,
    zone: str,
    instance_name: str,
    disks: List[compute_v1.AttachedDisk],
    machine_type: str = "n1-standard-1",
    network_link: str = "global/networks/default",
    subnetwork_link: str = None,
    internal_ip: str = None,
    external_access: bool = False,
    external_ipv4: str = None,
    accelerators: List[compute_v1.AcceleratorConfig] = None,
    preemptible: bool = False,
    spot: bool = False,
    instance_termination_action: str = "STOP",
    custom_hostname: str = None,
    delete_protection: bool = False,
) -> compute_v1.Instance:
    """
    Send an instance creation request to the Compute Engine API and wait for it to complete.

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        zone: name of the zone to create the instance in. For example: "us-west3-b"
        instance_name: name of the new virtual machine (VM) instance.
        disks: a list of compute_v1.AttachedDisk objects describing the disks
            you want to attach to your new instance.
        machine_type: machine type of the VM being created. This value uses the
            following format: "zones/{zone}/machineTypes/{type_name}".
            For example: "zones/europe-west3-c/machineTypes/f1-micro"
        network_link: name of the network you want the new instance to use.
            For example: "global/networks/default" represents the network
            named "default", which is created automatically for each project.
        subnetwork_link: name of the subnetwork you want the new instance to use.
            This value uses the following format:
            "regions/{region}/subnetworks/{subnetwork_name}"
        internal_ip: internal IP address you want to assign to the new instance.
            By default, a free address from the pool of available internal IP addresses of
            used subnet will be used.
        external_access: boolean flag indicating if the instance should have an external IPv4
            address assigned.
        external_ipv4: external IPv4 address to be assigned to this instance. If you specify
            an external IP address, it must live in the same region as the zone of the instance.
            This setting requires `external_access` to be set to True to work.
        accelerators: a list of AcceleratorConfig objects describing the accelerators that will
            be attached to the new instance.
        preemptible: boolean value indicating if the new instance should be preemptible
            or not. Preemptible VMs have been deprecated and you should now use Spot VMs.
        spot: boolean value indicating if the new instance should be a Spot VM or not.
        instance_termination_action: What action should be taken once a Spot VM is terminated.
            Possible values: "STOP", "DELETE"
        custom_hostname: Custom hostname of the new VM instance.
            Custom hostnames must conform to RFC 1035 requirements for valid hostnames.
        delete_protection: boolean value indicating if the new virtual machine should be
            protected against deletion or not.

    Returns:
        Instance object.
    """
    instance_client = compute_v1.InstancesClient()

    # Use the network interface provided in the network_link argument.
    network_interface = compute_v1.NetworkInterface()
    network_interface.name = network_link
    if subnetwork_link:
        network_interface.subnetwork = subnetwork_link

    if internal_ip:
        network_interface.network_i_p = internal_ip

    if external_access:
        access = compute_v1.AccessConfig()
        access.type_ = compute_v1.AccessConfig.Type.ONE_TO_ONE_NAT.name
        access.name = "External NAT"
        access.network_tier = access.NetworkTier.PREMIUM.name
        if external_ipv4:
            access.nat_i_p = external_ipv4
        network_interface.access_configs = [access]

    # Collect information into the Instance object.
    instance = compute_v1.Instance()
    instance.network_interfaces = [network_interface]
    instance.name = instance_name
    instance.disks = disks
    if re.match(r"^zones/[a-z\d\-]+/machineTypes/[a-z\d\-]+$", machine_type):
        instance.machine_type = machine_type
    else:
        instance.machine_type = f"zones/{zone}/machineTypes/{machine_type}"

    if accelerators:
        instance.guest_accelerators = accelerators

    if preemptible:
        # Set the preemptible setting
        warnings.warn(
            "Preemptible VMs are being replaced by Spot VMs.", DeprecationWarning
        )
        instance.scheduling = compute_v1.Scheduling()
        instance.scheduling.preemptible = True

    if spot:
        # Set the Spot VM setting
        instance.scheduling = compute_v1.Scheduling()
        instance.scheduling.provisioning_model = (
            compute_v1.Scheduling.ProvisioningModel.SPOT.name
        )
        instance.scheduling.instance_termination_action = instance_termination_action

    if custom_hostname is not None:
        # Set the custom hostname for the instance
        instance.hostname = custom_hostname

    if delete_protection:
        # Set the delete protection bit
        instance.deletion_protection = True

    # Prepare the request to insert an instance.
    request = compute_v1.InsertInstanceRequest()
    request.zone = zone
    request.project = project_id
    request.instance_resource = instance

    # Wait for the create operation to complete.
    print(f"Creating the {instance_name} instance in {zone}...")

    operation = instance_client.insert(request=request)

    wait_for_extended_operation(operation, "instance creation")

    print(f"Instance {instance_name} created.")
    return instance_client.get(project=project_id, zone=zone, instance=instance_name)
def create_from_custom_image(
    project_id: str, zone: str, instance_name: str, custom_image_link: str
) -> compute_v1.Instance:
    """
    Create a new VM instance with custom image used as its boot disk.

    Args:
        project_id: project ID or project number of the Cloud project you want to use.
        zone: name of the zone to create the instance in. For example: "us-west3-b"
        instance_name: name of the new virtual machine (VM) instance.
        custom_image_link: link to the custom image you want to use in the form of:
            "projects/{project_name}/global/images/{image_name}"

    Returns:
        Instance object.
    """
    disk_type = f"zones/{zone}/diskTypes/pd-standard"
    disks = [disk_from_image(disk_type, 10, True, custom_image_link, True)]
    instance = create_instance(project_id, zone, instance_name, disks)
    return instance
create_instance('my_project', 'us-central1-a', 'testvm-01','?')
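For what it's worth, the disks parameter expects a plain Python list of compute_v1.AttachedDisk objects (the same thing the create_from_custom_image helper above builds), rather than a disk name or path. A minimal sketch of such a call, assuming the project and image family names are placeholders:

# hypothetical values: replace the project and image family with your own
vm_image = get_image_from_family(project="my_project", family="my-image-family")
disk_type = "zones/us-central1-a/diskTypes/pd-standard"
disks = [disk_from_image(disk_type, 10, True, vm_image.self_link, True)]
create_instance('my_project', 'us-central1-a', 'testvm-01', disks)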
In Terraform this works by passing the attributes directly, but in CDK it does not. Does anyone know how to activate the stream on a DynamoDB table?
stream_enabled = true
stream_view_type = "NEW_AND_OLD_IMAGES"
I assume you are asking how to do so in CDK, with Terraform as your background:
from aws_cdk import aws_dynamodb as dynamodb
...
my_dynamo_table = dynamodb.Table(
    self, "LogicalIDForThisTable",
    ...
    stream=dynamodb.StreamViewType.NEW_AND_OLD_IMAGES
)
In order to use said stream, you need to create a DynamoEventSource object to pass to whatever resource will be consuming the stream:
https://docs.aws.amazon.com/cdk/api/v2/python/aws_cdk.aws_lambda_event_sources/DynamoEventSource.html
i.e.:
from aws_cdk import aws_lambda  # needed for StartingPosition
from aws_cdk import aws_lambda_event_sources as event_source
...
my_dynamo_event_stream = event_source.DynamoEventSource(
    my_dynamo_table,
    starting_position=aws_lambda.StartingPosition.TRIM_HORIZON,
    batch_size=25,
    retry_attempts=10
)

my_lambda.add_event_source(my_dynamo_event_stream)
from aws_cdk import aws_dynamodb as dynamodb
...
my_dynamo_table = dynamodb.Table(
    self, "LogicalIDForThisTable",
    ...
    stream=dynamodb.StreamViewType.NEW_AND_OLD_IMAGES
)
My problem is:
stream_enabled = true
How do I set that?
Thanks.
I want to create an Aurora PostgreSQL cluster and DB instance using CDK in Python. I have gone through the documents but am unable to create it. Following is the code:
import json

from constructs import Construct
from aws_cdk import (
    Stack,
    aws_secretsmanager as asm,
    aws_ssm as ssm,
    aws_rds as rds,
)
from settings import settings


class DatabaseDeploymentStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        stage_name = settings.stage
        region = Stack.of(self).region
        account = Stack.of(self).account
        db_username = 'customdbuser'  # settings.db_username
        db_name = f'netsol_{stage_name}_db'
        db_resource_prefix = f'netsol-{region}-{stage_name}'
        print(db_resource_prefix)
        is_staging: bool = stage_name == 'staging'

        generated_secret_string = asm.SecretStringGenerator(
            secret_string_template=json.dumps({"username": f"{db_username}"}),
            exclude_punctuation=True,
            include_space=False,
            generate_string_key='password'
        )

        db_credentials_secret = asm.Secret(
            self, 'db_credentials_secret',
            secret_name=f'{db_resource_prefix}-credentials',
            generate_secret_string=generated_secret_string
        )

        ssm.StringParameter(
            self, 'db_credentials_arn',
            parameter_name=f'{db_resource_prefix}-credentials-arn',
            string_value=db_credentials_secret.secret_arn
        )

        scaling_configuration = rds.CfnDBCluster.ScalingConfigurationProperty(
            auto_pause=True,
            max_capacity=4 if is_staging else 384,
            min_capacity=2,
            seconds_until_auto_pause=900 if is_staging else 10800
        )

        db_cluster = rds.CfnDBCluster(
            self, 'db_cluster',
            db_cluster_identifier=f'{db_resource_prefix}-clusterabz',
            engine_mode='serverless',
            engine='aurora-postgresql',
            engine_version='10.14',
            enable_http_endpoint=True,
            database_name=db_name,
            master_username='abz',
            master_user_password='Password123',
            backup_retention_period=1 if is_staging else 30,
            scaling_configuration=scaling_configuration,
            deletion_protection=False if is_staging else False
        )

        db_cluster_arn = f'arn:aws:rds:{region}:{account}:cluster:{db_cluster.ref}'

        ssm.StringParameter(
            self, 'db_resource_arn',
            parameter_name=f'{db_resource_prefix}-resource-arn',
            string_value=db_cluster_arn
        )

        cfn_dBInstance = rds.CfnDBInstance(
            self, "db_instance",
            db_instance_class="db.t3.medium",
        )
When I run this code, I get the following error:
"db_instance (dbinstance) Property AllocatedStorage cannot be empty"
I have gone through the AWS documentation, which says that this property is not required for Amazon Aurora. Moreover, I have also tried giving this property along with other properties, but I am still not able to create the instance.
Can anyone help me figure out the problem, please?
Note:
When I run the code without the DB instance, the cluster is created successfully.
Required output
The required output is shown in the image below.
After spending some time I figured out the issue. Following are the details:
The code was actually fine when I deployed the DB cluster and the DB instance separately, but when I deployed the whole stack, the system tried to create the DB instance before the DB cluster had been created, and that is what caused the error.
The problem was resolved after adding a dependency like below.
cfn_dBInstance.add_depends_on(db_cluster)
The line above ensures that the DB instance is only created once the DB cluster has been successfully created.
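In context, the fix looks roughly like this (a sketch based on the snippet from the question; only the dependency line is new):

cfn_dBInstance = rds.CfnDBInstance(
    self, "db_instance",
    db_instance_class="db.t3.medium",
)
# Force CloudFormation to create the cluster before it creates the instance.
cfn_dBInstance.add_depends_on(db_cluster)  # newer aws-cdk-lib versions also expose add_dependency()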
I am new to CDK and am trying to create an instance profile with CDK + Python using the following code. I have already created the role (gitLabRunner-glue) successfully through CDK and want to use it with the instance profile. However, when I run the following code, I get an error that gitLabRunner-glue already exists.
Can somebody please explain what I am missing?
from aws_cdk import core as cdk
from aws_cdk import aws_glue as glue
from aws_cdk import aws_ec2 as _ec2
from aws_cdk import aws_iam as _iam


class Ec2InstanceProfile(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # role = _iam.Role(self, "instanceprofilerole", role_name="gitLabRunner-glue",
        #                  assumed_by=_iam.ServicePrincipal(service='ec2.amazonaws.com'))

        ec2gitLabRunnerinstanceprofile = _iam.CfnInstanceProfile(
            self,
            "ec2gitLabRunnerinstanceprofile",
            instance_profile_name="ec2-gitLabRunner-glue",
            roles=["gitLabRunner-glue"]  # also tried with this: [role.role_name]
        )
Does your AWS account already have a role with that name in it?
The Cfn* functions in CDK represent constructs and services that have not been fully hooked into everything CDK offers. As such, they often don't behave the way the higher-level constructs would: whereas a hand-written CloudFormation template for the instance profile might simply reference the existing role, the code behind this Cfn function may go ahead and create a role entry in the template output.
If you do a cdk synth, look in your cdk.out directory, find your CloudFormation template, and search for gitLabRunner-glue. You may find there is an AWS::IAM::Role being created, which would indicate that when CloudFormation runs the template produced by CDK, it tries to create a new resource and can't.
You have a couple of options to try:
As you tried, uncomment the role again and use role.role_name, but either name the role something else or, as CDK recommends, don't include a name at all and let CDK generate one for you.
Search your AWS account for the role and delete it.
If you absolutely cannot delete the existing role or create a new one with a new name, then import the role using (based off your imports):
role = _iam.Role.from_role_arn(self, "ImportedGlueRole", role_arn="arn:aws:of:the:role", add_grants_to_resources=True)
Be a bit wary of add_grants_to_resources: if it is not your role to modify, CDK can make changes to it when that flag is True, and that could cause issues elsewhere. If it is False, however, you have to update the role itself in the AWS console (or CLI) so that your resources are allowed to assume it.
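Putting the import together with the snippet from the question, the instance profile could then reference the imported role instead of a hard-coded string (a sketch reusing the question's identifiers; the role ARN is an assumption):

imported_role = _iam.Role.from_role_arn(
    self, "ImportedGlueRole",
    role_arn=f"arn:aws:iam::{self.account}:role/gitLabRunner-glue",  # assumed ARN of the existing role
    add_grants_to_resources=True
)

ec2gitLabRunnerinstanceprofile = _iam.CfnInstanceProfile(
    self,
    "ec2gitLabRunnerinstanceprofile",
    instance_profile_name="ec2-gitLabRunner-glue",
    roles=[imported_role.role_name]
)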
I made it work like this. It is not the desired model, but given the limitations of CDK, I couldn't find any other way.
from aws_cdk import core as cdk
from aws_cdk import aws_glue as glue
from aws_cdk import aws_ec2 as _ec2
from aws_cdk import aws_iam as _iam


class Ec2InstanceProfile(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        boundary = _iam.ManagedPolicy.from_managed_policy_arn(
            self, "Boundary",
            "arn:aws:iam::${ACCOUNT_ID}:policy/app-perm-boundary"
        )

        # role = _iam.Role(self, "instanceprofilerole", role_name="gitLabRunner-glue",
        #                  assumed_by=_iam.ServicePrincipal(service='ec2.amazonaws.com'))

        ec2_gitlabrunner_glue = _iam.Role(
            self, 'ec2-gitlabrunner-glue',
            role_name='gitLabRunner-glue',
            description='glue service role to be attached to the runner',
            # inline_policies=[write_to_s3_policy],
            assumed_by=_iam.ServicePrincipal('ec2.amazonaws.com'),
            permissions_boundary=boundary
        )

        ec2gitLabRunnerinstanceprofile = _iam.CfnInstanceProfile(
            self,
            "gitLabRunnerinstanceprofile",
            instance_profile_name="gitLabRunner-glue",
            roles=["gitLabRunner-glue"]
        )
Trying to automate the deployment of a static website using boto3. I have a static website (angular/javascript/html) sitting in a bucket, and need to use the aws cloudfront CDN.
Anyway, looks like making the s3 bucket and copying in the html/js is working fine.
import boto3

cf = boto3.client('cloudfront')

cf.create_distribution(DistributionConfig=dict(CallerReference='firstOne',
    Aliases=dict(Quantity=1, Items=['mydomain.com']),
    DefaultRootObject='index.html',
    Comment='Test distribution',
    Enabled=True,
    Origins=dict(
        Quantity=1,
        Items=[dict(
            Id='1',
            DomainName='mydomain.com.s3.amazonaws.com')
        ]),
    DefaultCacheBehavior=dict(
        TargetOriginId='1',
        ViewerProtocolPolicy='redirect-to-https',
        TrustedSigners=dict(Quantity=0, Enabled=False),
        ForwardedValues=dict(
            Cookies={'Forward': 'all'},
            Headers=dict(Quantity=0),
            QueryString=False,
            QueryStringCacheKeys=dict(Quantity=0),
        ),
        MinTTL=1000)
    )
)
When I try to create the cloudfront distribution, I get the following error:
InvalidOrigin: An error occurred (InvalidOrigin) when calling the CreateDistribution operation: The specified origin server does not exist or is not valid.
Interestingly, it looks to be complaining about the origin, mydomain.com.s3.amazonaws.com; however, when I create a distribution for the s3 bucket in the web console, it has no problem with the same origin domain name.
Update:
I can get this to work with boto with the following, but would rather use boto3:
import boto
c = boto.connect_cloudfront()
origin = boto.cloudfront.origin.S3Origin('mydomain.com.s3.amazonaws.com')
distro = c.create_distribution(origin=origin, enabled=False, comment='My new Distribution')
Turns out there is a required parameter that is not documented properly.
Since the origin is an S3 bucket, you must have S3OriginConfig = dict(OriginAccessIdentity = '') defined, even if the OriginAccessIdentity is not used and is left as an empty string.
The following command works. Note that you still need a bucket policy to make the objects accessible (a sketch of one follows the command below), and a Route 53 entry to alias the CNAME you want to the CloudFront-generated hostname.
cf.create_distribution(DistributionConfig=dict(CallerReference='firstOne',
    Aliases=dict(Quantity=1, Items=['mydomain.com']),
    DefaultRootObject='index.html',
    Comment='Test distribution',
    Enabled=True,
    Origins=dict(
        Quantity=1,
        Items=[dict(
            Id='1',
            DomainName='mydomain.com.s3.amazonaws.com',
            S3OriginConfig=dict(OriginAccessIdentity=''))
        ]),
    DefaultCacheBehavior=dict(
        TargetOriginId='1',
        ViewerProtocolPolicy='redirect-to-https',
        TrustedSigners=dict(Quantity=0, Enabled=False),
        ForwardedValues=dict(
            Cookies={'Forward': 'all'},
            Headers=dict(Quantity=0),
            QueryString=False,
            QueryStringCacheKeys=dict(Quantity=0),
        ),
        MinTTL=1000)
    )
)
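For completeness, here is a minimal sketch of the kind of public-read bucket policy mentioned above, applied with boto3. The bucket name is a placeholder, and for a locked-down setup you would scope the principal to a CloudFront origin access identity instead:

import json

import boto3

s3 = boto3.client('s3')
bucket_name = 'mydomain.com'  # placeholder: your website bucket

# Allow anyone (including CloudFront) to GET the objects in the bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket_name}/*",
    }],
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(public_read_policy))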