I am new to CDK and trying to create an instance profile with CDK + Python using the code below. I have already created the role (gitLabRunner-glue) successfully through CDK and want to use it with the instance profile. However, when I run the following code, I get the error "gitLabRunner-glue already exists".
Can somebody please explain what I am missing?
from aws_cdk import core as cdk
from aws_cdk import aws_glue as glue
from aws_cdk import aws_ec2 as _ec2
from aws_cdk import aws_iam as _iam

class Ec2InstanceProfile(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # role = _iam.Role(self, "instanceprofilerole", role_name="gitLabRunner-glue",
        #                  assumed_by=_iam.ServicePrincipal(service='ec2.amazonaws.com'))

        ec2gitLabRunnerinstanceprofile = _iam.CfnInstanceProfile(
            self,
            "ec2gitLabRunnerinstanceprofile",
            instance_profile_name="ec2-gitLabRunner-glue",
            roles=["gitLabRunner-glue"]  # also tried with [role.role_name]
        )
Does your AWS account already have a role with that name in it?
The Cfn* constructs in CDK are low-level ("L1") constructs for services that have not been fully integrated into the rest of CDK. As such, they often don't behave the way the higher-level constructs would: whereas a hand-written CloudFormation template for the instance profile might simply reference the existing role, the code behind this Cfn construct may go ahead and create a role item in the template output.
If you do a cdk synth, look in your cdk.out directory, find your CloudFormation template, and search for gitLabRunner-glue. You may find an AWS::IAM::Role being created, which means that when CloudFormation runs the template CDK generated, it tries to create a new role and can't, because one with that name already exists.
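If you prefer not to eyeball the template, a few lines of plain Python can list any roles in the synthesized output. This is just a sketch; the template filename in the usage comment is a placeholder for whatever your stack writes to cdk.out:

```python
import json
from pathlib import Path

def find_roles(template_path):
    """Return {logical_id: RoleName} for every AWS::IAM::Role in a synthesized template."""
    template = json.loads(Path(template_path).read_text())
    return {
        logical_id: resource.get("Properties", {}).get("RoleName")
        for logical_id, resource in template.get("Resources", {}).items()
        if resource.get("Type") == "AWS::IAM::Role"
    }

# e.g. find_roles("cdk.out/Ec2InstanceProfile.template.json")
```

If gitLabRunner-glue shows up in the result, the template is indeed trying to create the role a second time.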
You have a couple of options to try:
As you tried, uncomment the role again and use role.role_name, but name the role something else, or, as CDK recommends, don't set a name at all and let CDK generate one for you.
Search your AWS account for the role and delete it.
If you absolutely cannot delete the existing role and cannot create a new one with a new name, then import the role, using (based on your imports):
role = _iam.Role.from_role_arn(self, "ImportedGlueRole", role_arn="arn:aws:of:the:role", add_grants_to_resources=True)
Be a bit wary of add_grants_to_resources: if it is not your role to modify, CDK can make changes to it when this is true, and that could cause issues elsewhere. If it is false, then you have to update the role itself in the AWS console (or CLI) so that it accepts your resources as able to assume it.
I made it work like this. It is not the desired model, but given the limitations of CDK I couldn't find any other way.
from aws_cdk import core as cdk
from aws_cdk import aws_glue as glue
from aws_cdk import aws_ec2 as _ec2
from aws_cdk import aws_iam as _iam

class Ec2InstanceProfile(cdk.Stack):
    def __init__(self, scope: cdk.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        boundary = _iam.ManagedPolicy.from_managed_policy_arn(
            self, "Boundary",
            "arn:aws:iam::${ACCOUNT_ID}:policy/app-perm-boundary"
        )

        # role = _iam.Role(self, "instanceprofilerole", role_name="gitLabRunner-glue",
        #                  assumed_by=_iam.ServicePrincipal(service='ec2.amazonaws.com'))

        ec2_gitlabrunner_glue = _iam.Role(
            self, 'ec2-gitlabrunner-glue',
            role_name='gitLabRunner-glue',
            description='glue service role to be attached to the runner',
            # inline_policies=[write_to_s3_policy],
            assumed_by=_iam.ServicePrincipal('ec2.amazonaws.com'),
            permissions_boundary=boundary
        )

        ec2gitLabRunnerinstanceprofile = _iam.CfnInstanceProfile(
            self,
            "gitLabRunnerinstanceprofile",
            instance_profile_name="gitLabRunner-glue",
            roles=["gitLabRunner-glue"]
        )
Related
I am trying to add a CDK Aspect to this AWS workshop
https://github.com/aws-samples/aws-cdk-intro-workshop/tree/master/code/python/pipelines-workshop
My Aspect would prefix "my-custom-prefix-" to all role and policy names (an org requirement, don't ask why) and also add a permission boundary to all roles.
import jsii
from typing import Union
from aws_cdk import IAspect
from aws_cdk import aws_iam as iam
from constructs import IConstruct

iam_prefix = "my-custom-prefix-"

@jsii.implements(IAspect)
class PermissionBoundaryAspect:
    def __init__(self, permission_boundary: Union[iam.ManagedPolicy, str]) -> None:
        self.permission_boundary = permission_boundary

    def visit(self, construct_ref: IConstruct) -> None:
        if isinstance(construct_ref, iam.Role):
            iam_role = construct_ref.node.find_child('Resource')
            iam_role.add_property_override(property_path='PermissionsBoundary', value=self.permission_boundary)
            if not iam_role.role_name.startswith(iam_prefix):
                newrolename = f"{iam_prefix}{iam_role.role_name}"
                iam_role.add_property_override(property_path='RoleName', value=newrolename)
and I use it in my app.py
app = cdk.App()
WorkshopPipelineStack(app, "WorkshopPipelineStack")
cftdeveloperboundary = f"arn:aws:iam::{t_account_id}:policy/my-boundary-policy"
cdk.Aspects.of(app).add(PermissionBoundaryAspect(cftdeveloperboundary))
app.synth()
Next I run cdk synth and, in the CloudFormation template, I see that the boundary is attached to the roles and the policy names are prefixed, but the role names are not. The role names all come out as RoleName: my-custom-prefix-, whereas the policy names are properly prefixed, e.g. PolicyName: my-custom-prefix-PipelineRoleDefaultPolicy7BDC1ABB.
Using this as a reference: https://airflow.apache.org/docs/apache-airflow/stable/howto/define_extra_link.html
I cannot get the links to show in the UI. I have tried adding the link within the operator itself and building a separate extra_link.py file to add it, but the link doesn't show up when looking at the task in graph or grid view. Here is my code for creating it in the operator:
class upstream_link(BaseOperatorLink):
    """Create a link to the upstream task"""

    name = "Test Link"

    def get_link(self, operator, *, ti_key):
        return "https://www.google.com"


# Defining the plugin class
class AirflowExtraLinkPlugin(AirflowPlugin):
    name = "integration_links"
    operator_extra_links = [
        upstream_link(),
    ]


class BaseOperator(BaseOperator, SkipMixin, ABC):
    """Base Operator for all integrations"""

    operator_extra_links = (upstream_link(),)
This is a custom BaseOperator class used by a few operators in my deployment. I don’t know if the inheritance is causing the issue or not. Any help would be greatly appreciated.
Also, the goal is to have this on mapped tasks, this does work with mapped tasks right?
Edit: Here is the code I used when I tried the standalone-file approach in the plugins folder:
from airflow.models.baseoperator import BaseOperatorLink
from plugins.operators.integrations.base_operator import BaseOperator
from airflow.plugins_manager import AirflowPlugin


class upstream_link(BaseOperatorLink):
    """Create a link to the upstream task"""

    name = "Upstream Data"
    operators = [BaseOperator]

    def get_link(self, operator, *, ti_key):
        return "https://www.google.com"


# Defining the plugin class
class AirflowExtraLinkPlugin(AirflowPlugin):
    name = "extra_link_plugin"
    operator_extra_links = [
        upstream_link(),
    ]
Custom plugins should be defined in the plugins folder (by default $AIRFLOW_HOME/plugins) so that they are processed by the plugin manager.
Try creating a new script in the plugins folder and moving the AirflowExtraLinkPlugin class into that script; it should then work.
The issue turned out to be the inheritance. Attaching the extra link does not carry through to child classes, as Airflow appears to look for that specific operator name. Extra links also do not seem to work with mapped tasks.
I want to create an Aurora PostgreSQL cluster and DB instance using CDK in Python. I have gone through the documentation but am unable to create it. Following is the code:
import json

from constructs import Construct
from aws_cdk import (
    Stack,
    aws_secretsmanager as asm,
    aws_ssm as ssm,
    aws_rds as rds,
)

from settings import settings


class DatabaseDeploymentStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        stage_name = settings.stage
        region = Stack.of(self).region
        account = Stack.of(self).account
        db_username = 'customdbuser'  # settings.db_username
        db_name = f'netsol_{stage_name}_db'
        db_resource_prefix = f'netsol-{region}-{stage_name}'
        print(db_resource_prefix)
        is_staging: bool = stage_name == 'staging'

        generated_secret_string = asm.SecretStringGenerator(
            secret_string_template=json.dumps({"username": f"{db_username}"}),
            exclude_punctuation=True,
            include_space=False,
            generate_string_key='password'
        )

        db_credentials_secret = asm.Secret(
            self, 'db_credentials_secret',
            secret_name=f'{db_resource_prefix}-credentials',
            generate_secret_string=generated_secret_string
        )

        ssm.StringParameter(
            self, 'db_credentials_arn',
            parameter_name=f'{db_resource_prefix}-credentials-arn',
            string_value=db_credentials_secret.secret_arn
        )

        scaling_configuration = rds.CfnDBCluster.ScalingConfigurationProperty(
            auto_pause=True,
            max_capacity=4 if is_staging else 384,
            min_capacity=2,
            seconds_until_auto_pause=900 if is_staging else 10800
        )

        db_cluster = rds.CfnDBCluster(
            self, 'db_cluster',
            db_cluster_identifier=f'{db_resource_prefix}-clusterabz',
            engine_mode='serverless',
            engine='aurora-postgresql',
            engine_version='10.14',
            enable_http_endpoint=True,
            database_name=db_name,
            master_username='abz',
            master_user_password='Password123',
            backup_retention_period=1 if is_staging else 30,
            scaling_configuration=scaling_configuration,
            deletion_protection=False if is_staging else False
        )

        db_cluster_arn = f'arn:aws:rds:{region}:{account}:cluster:{db_cluster.ref}'

        ssm.StringParameter(
            self, 'db_resource_arn',
            parameter_name=f'{db_resource_prefix}-resource-arn',
            string_value=db_cluster_arn
        )

        cfn_dBInstance = rds.CfnDBInstance(
            self, "db_instance",
            db_instance_class="db.t3.medium",
        )
When I run this code, I get the following error:
"db_instance (dbinstance) Property AllocatedStorage cannot be empty"
I have gone through the AWS documentation, which says that this property is not required for Amazon Aurora. Moreover, I have also tried giving this property along with other properties, but I am still not able to create the instance.
Can anyone help me figure out the problem, please?
Note: when I run the code without the DB instance, the cluster is created successfully.
Required output
The required output is as shown in the image below.
After spending some time I figured out the issue. Following are the details:
The code was actually fine when I deployed the DB cluster and DB instance separately, but when I deployed the whole stack, the system tried to create the DB instance before creating the DB cluster, and that is what caused the error.
The problem was resolved by adding a dependency, like below:
cfn_dBInstance.add_depends_on(db_cluster)
The line above ensures that the DB instance is only created once the DB cluster has been successfully created.
New to AWS CDK, and I'm trying to create a load-balanced Fargate service with the construct ApplicationLoadBalancedFargateService.
I have an existing image on ECR that I would like to reference and use. I've found the ecs.ContainerImage.from_ecr_repository function, which I believe is what I should use in this case. However, this function takes an IRepository as a parameter and I cannot find anything under aws_ecr.IRepository or aws_ecr.Repository to reference a pre-existing image. These constructs all seem to be for making a new repository.
Anyone know what I should be using to get the IRepository object for an existing repo? Is this just not typically done this way?
Code is below. Thanks in Advance.
from aws_cdk import (
    # Duration,
    Stack,
    # aws_sqs as sqs,
)
from constructs import Construct
from aws_cdk import (aws_ec2 as ec2, aws_ecs as ecs,
                     aws_ecs_patterns as ecs_patterns,
                     aws_route53, aws_certificatemanager,
                     aws_ecr)


class NewStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        _repo = aws_ecr.Repository(self, 'id1', repository_uri=repo_uri)

        vpc = ec2.Vpc(self, "applications", max_azs=3)  # default is all AZs in region
        cluster = ecs.Cluster(self, "id2", vpc=vpc)

        hosted_zone = aws_route53.HostedZone.from_lookup(
            self, 'id3',
            domain_name='domain'
        )

        certificate = aws_certificatemanager.Certificate.from_certificate_arn(
            self, 'id4',
            'cert_arn'
        )

        image = ecs.ContainerImage.from_ecr_repository(_repo)

        ecs_patterns.ApplicationLoadBalancedFargateService(
            self, "id5",
            cluster=cluster,  # Required
            cpu=512,  # Default is 256
            desired_count=2,  # Default is 1
            task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
                image=image,
                container_port=8000),
            memory_limit_mib=2048,  # Default is 512
            public_load_balancer=True,
            domain_name='domain_name',
            domain_zone=hosted_zone,
            certificate=certificate,
            redirect_http=True)
You are looking for from_repository_attributes() to create an instance of IRepository from an existing ECR repository.
I would like to upload a Docker image from local disk to a repository that I created with the AWS CDK.
When I use aws_ecr_assets.DockerImageAsset to add a Docker image to a repository (which works fine, except that I am not able to set permissions on its repository via CDK), I get the following deprecation warning:
DockerImageAsset.repositoryName is deprecated. Override "core.Stack.addDockerImageAsset" to control asset locations
When looking into core.Stack.addDockerImageAsset, I get a hint that I should override stack.synthesizer.addDockerImageAsset().
My simplified stack with a custom synthesizer looks like this:
class CustomSynthesizer(core.LegacyStackSynthesizer):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def add_docker_image_asset(
        self,
        *,
        directory_name,
        source_hash,
        docker_build_args=None,
        docker_build_target=None,
        docker_file=None,
        repository_name=None,
    ):
        # What should I put in this function to upload the docker image into my repository?
        ...


class ContainerStack(core.Stack):
    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, synthesizer=CustomSynthesizer(), **kwargs)

        repo = ecr.Repository(self, "TestRepo", repository_name="test_repo")
        repo.grant_pull(iam.AccountPrincipal("123456789123"))

        image_location = self.synthesizer.add_docker_image_asset(
            directory_name="path/to/dir/with/Dockerfile",
            source_hash="latest",
            repository_name=repo.repository_name,
        )
Another thing I tried was to use the standard stack synthesizer and call add_docker_image_asset on it. Unfortunately, I get the following error message and the stack fails to deploy:
test-stack: deploying...
[0%] start: Publishing latest:current
[25%] fail: Unable to execute 'docker' in order to build a container asset. Please install 'docker' and try again.
...
❌ test-stack failed: Error: Failed to publish one or more assets. See the error messages above for more information.
at Object.publishAssets (/home/user/.npm-global/lib/node_modules/aws-cdk/lib/util/asset-publishing.ts:25:11)
at processTicksAndRejections (internal/process/task_queues.js:97:5)
at Object.deployStack (/home/user/.npm-global/lib/node_modules/aws-cdk/lib/api/deploy-stack.ts:216:3)
at CdkToolkit.deploy (/home/user/.npm-global/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:181:24)
at main (/home/user/.npm-global/lib/node_modules/aws-cdk/bin/cdk.ts:268:16)
at initCommandLine (/home/user/.npm-global/lib/node_modules/aws-cdk/bin/cdk.ts:188:9)
Failed to publish one or more assets. See the error messages above for more information.
I'm at a complete loss as to how to solve this problem, and any help is much appreciated!
DockerImageAsset is a managed construct, which handles the repository and versioning by itself. By default it creates a repository for your stack and uploads your docker images to it (tagging them by their hash).
You do not need to create this repository yourself. However, if you are like me and want the repository to have a legible name, you can set it via the cdk.json config file and its context section:
// cdk.json
{
  "app": "python3 your_app",
  "context": {
    "#aws-cdk/core:enableStackNameDuplicates": "true",
    "aws-cdk:enableDiffNoFail": "true",
    "assets-ecr-repository-name": "<REPO NAME GOES HERE>"
  }
}
If you want to further alter the repo (I like to leave it fully managed, but hey), you can load the repo into the CDK stack by using one of the static methods on the aws_ecr.Repository construct.
https://docs.aws.amazon.com/cdk/api/latest/docs/#aws-cdk_aws-ecr.Repository.html#static-from-wbr-repository-wbr-namescope-id-repositoryname
Hope this helps a little :-)