I am trying to set up notifications for my CodePipeline in AWS.
I have been following this: https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_codestarnotifications/README.html
pipeline = CodePipeline(
    self,
    id,
    pipeline_name=id,
    synth=synth_step,
    cross_account_keys=True,
    code_build_defaults=pipelines.CodeBuildOptions(
        build_environment=BuildEnvironment(
            build_image=aws_codebuild.LinuxBuildImage.STANDARD_5_0,
            privileged=True,
        )
    ),
)
After creating the pipeline within the stack, I create a notification rule:
rule = aws_codestarnotifications.NotificationRule(
    self,
    "NotificationRule",
    source=pipeline,
    events=[
        "codepipeline-pipeline-pipeline-execution-failed",
        "codepipeline-pipeline-pipeline-execution-succeeded",
    ],
    targets=[sns_topic],
)
But I am getting: RuntimeError: props.source.bindAsNotificationRuleSource is not a function.
I also tried the solution mentioned here, but it didn't work out:
https://github.com/aws/aws-cdk/issues/9710
Does anyone have an idea of where I am going wrong?
The issue was with
source=pipeline,
Here, the source of the notification rule expects a pipeline ARN.
Because we are working with CDK, we need to make sure the pipeline is actually built before we set up the notification rule.
To overcome this, build the pipeline first and then create the notification rule: call pipeline.build_pipeline() after constructing the pipeline and before the notification rule code.
This worked for me.
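For reference, a minimal sketch of how this can look in the stack, using the pipeline and sns_topic from the question. Passing the built underlying pipeline via pipeline.pipeline as the source is my reading of the fix, not something stated verbatim above:

# force the underlying aws_codepipeline.Pipeline to be created first
pipeline.build_pipeline()

rule = aws_codestarnotifications.NotificationRule(
    self,
    "NotificationRule",
    # use the built, underlying pipeline as the notification source (assumption)
    source=pipeline.pipeline,
    events=[
        "codepipeline-pipeline-pipeline-execution-failed",
        "codepipeline-pipeline-pipeline-execution-succeeded",
    ],
    targets=[sns_topic],
)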
Please also refer to the active thread for a detailed explanation.
I am trying to get a parameter from AWS Systems Manager for ECS Fargate containers, but I am running into a problem. My code is:
secret_value = ssm.StringParameter.from_secure_string_parameter_attributes(
    self,
    "/spark/ssh_pub",
    parameter_name="/spark/ssh_pub",
    version=1,
)
container_sp = fargate_task_definition_sp.add_container(
    "pod-spark-master",
    image=ecs.ContainerImage.from_registry(
        "xxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com/spark-master:ready-for-test-deployment"
    ),
    health_check=health_check_sp,
    logging=log_config_sp,
    secrets={
        "SPARK_PUB": ecs.Secret.from_ssm_parameter(secret_value)
    },
)
Then I get this error:
jsii.errors.JSIIError: There is already a Construct with name '--spark--ssh_pub' in Stack [sandbox]
Does anyone have any idea?
There are a few possibilities.
CDK Bug
CDK has plenty of open issues. A similar error was reported in https://github.com/aws/aws-cdk/issues/8603, so it can be a CDK bug. In that case, all we can do is raise an issue on GitHub and hope they fix it, which may not happen soon given the 1000+ issues reported and open.
There may actually be a few CDK constructs (AWS resources) that were given the same name. Search through your stack "sandbox" and make sure no duplicate name is created, for example when the same kind of construct is created more than once with the same ID.
There is already a Construct with name '--spark--ssh_pub' in Stack [sandbox].
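If the duplicate comes from the SSM parameter lookup in the question, one option is to give the construct an ID that is distinct from the parameter path. This is just a sketch; the construct ID here is made up, and it assumes nothing else in the stack already uses it:

secret_value = ssm.StringParameter.from_secure_string_parameter_attributes(
    self,
    "SparkSshPubParam",  # unique construct ID, separate from the parameter path
    parameter_name="/spark/ssh_pub",
    version=1,
)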
Please also make sure this is actually what you need:
image=ecs.ContainerImage.from_registry(
    "xxxxxxxx.dkr.ecr.eu-central-1.amazonaws.com...
Apparently the Docker image is in your ECR, so from_ecr_repository should be the one to use. AWS documentation is confusing and sometimes incorrect: from_registry does not pull images from ECR but from Docker Hub and other registries.
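A rough sketch of that alternative; the repository name and tag are inferred from the image URI in the question, and the construct ID is made up:

from aws_cdk import aws_ecr as ecr  # CDK v2 import style; adjust for v1

# look up the existing ECR repository (repo name and construct ID are assumptions)
repo = ecr.Repository.from_repository_name(self, "SparkMasterRepo", "spark-master")

# then, in add_container:
image = ecs.ContainerImage.from_ecr_repository(repo, tag="ready-for-test-deployment")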
A few days back I asked the Stack Overflow community a question about a custom construct library: Question
Although I didn't get the exact answer I was looking for, I somehow managed to create a custom construct library. But now I have another query: how can I host the CDK app as an API?
Below is a snapshot of the custom construct library:
test_ec2 = ec2.Instance(
    self,
    "APIInstance",
    vpc=my_vpc,
    machine_image=ec2.AmazonLinuxImage(
        generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX
    ),
    key_name="test-cdk",
    instance_name=inst-name
)
I want to host the above AWS CDK application as an API that accepts a string for the variable inst-name and creates an EC2 instance. I tried creating it as a Lambda function, but I am not sure how to manage the Node dependencies and Python dependencies at the same time.
Could it be done using an already created EC2 instance (attaching an IAM role with CloudFormation permissions) that accepts HTTP requests (but I don't know how)? Does this make sense?
Thank you in advance to all the devs.
There are many ways to do this. I think the simplest would be to synthesize your CloudFormation templates, publish them to S3 ahead of time, and use an API Gateway REST API with an AWS request-type integration that creates the CloudFormation stack.
Here's a tutorial that explains how to build a REST API with an AWS integration: https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-aws-proxy.html
That tutorial uses the SNS:ListTopics action, but you would want the cloudformation:CreateStack action instead.
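For illustration, the call that the integration (or a small backend) ultimately needs to make is cloudformation:CreateStack against the pre-synthesized template in S3. In Python/boto3 it would look roughly like this; the bucket, template, stack and parameter names are all hypothetical:

import boto3

cfn = boto3.client("cloudformation")

def create_instance_stack(inst_name):
    # inst_name comes from the API request; the template was synthesized by CDK
    # and uploaded to S3 ahead of time, and exposes an InstanceName parameter
    return cfn.create_stack(
        StackName=f"api-instance-{inst_name}",
        TemplateURL="https://my-bucket.s3.amazonaws.com/api-instance.template.json",
        Parameters=[
            {"ParameterKey": "InstanceName", "ParameterValue": inst_name},
        ],
        Capabilities=["CAPABILITY_IAM"],
    )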
We are attempting to use AWS X-Ray to trace an event through multiple services. We have enabled X-Ray within Lambda via the checkbox and added the Python (v2) SDK. This is giving us good information for each Lambda, but they are not connected. Here is our model:
Event hits SNS
Lambda is triggered for preprocessing and writes to SQS
Event sits in SQS
Another Lambda picks up the event, processes it, and writes to another SNS topic
We can see the Python libraries being called because we use patch_all().
I was hoping to see end-to-end connectivity, but I don't know how to associate these components. Right now we see the Lambdas as independent pieces and nothing for SQS.
Currently X-Ray does not support the above use case, and the team is actively working on it. We cannot share an ETA on when this will be available.
For more details please see
https://forums.aws.amazon.com/thread.jspa?messageID=873142
Although, as Rusty rightly says, it is not currently officially supported, you can work around this yourself by creating a new X-Ray segment inside the Lambda function, using the incoming trace ID from the SQS message. This results in two segments for your Lambda invocation: one that Lambda itself creates, and one that you create to extend the existing trace. Whether that's acceptable for your use case is something you'll have to decide for yourself!
If you're working with Python you can do it with aws-xray-lambda-segment-shim.
If you're working with NodeJS you can follow this guide on dev.to.
If you're working with .NET there are some examples on this GitHub issue.
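For illustration, here is a rough hand-rolled Python sketch of the approach described above; the segment name is made up, and it assumes the aws-xray-sdk package inside an SQS-triggered Lambda:

from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core.models.trace_header import TraceHeader

def handler(event, context):
    for record in event["Records"]:
        # SQS carries the upstream trace in the AWSTraceHeader system attribute
        header = TraceHeader.from_header_str(record["attributes"]["AWSTraceHeader"])
        # open a second segment that continues the incoming trace
        xray_recorder.begin_segment(
            name="sqs-consumer",
            traceid=header.root,
            parent_id=header.parent,
        )
        try:
            # ... your actual processing of the record goes here ...
            pass
        finally:
            xray_recorder.end_segment()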
My goal is to set up a Buildbot instance that listens for webhooks from a GitHub server and then builds the repo listed in the webhook via a generic make all command.
The issue I'm having is that it appears I need to specify the GitHub repo in advance in the build steps, i.e.:
factory.addStep(
    steps.GitHub(
        repourl="github.<domain>.com/<user>/<repo>/",
        mode='full',
        method='clobber'
    )
)
Whereas ideally I'd want to grab the repo URL from the HTTP request (obviously validating it before blindly running code) and then check it out. Something like:
factory.addStep(
    steps.GitHub(
        repourl=request["repo_url"],
        mode='full',
        method='clobber'
    )
)
Is this possible in the Buildbot framework? Any tips or additional documentation to look at would be greatly appreciated!
Just in case anyone else comes across this: I found two potential solutions. First, there is an undocumented option for the webhook that adds all of the HTTP request information to the properties object:
'www': {
    ...
    "change_hook_dialects": {'github': {"github_property_whitelist": "*"}},
    ...
}
This gives you access to all the HTTP request info in the scheduler/builder stages. You can then also grab property information in the build steps using util.Property, i.e.:
factory.addStep(
    steps.GitHub(
        repourl=util.Property('repository'),
        mode='full',
        method='clobber',
        submodules=True
    )
)
I am running a Spark step on AWS EMR; the step is added to EMR through Boto3. I would like to return to the user a percentage of completion of the task. Is there any way to do this?
I was thinking of calculating this percentage from the number of completed Spark stages. I know this won't be very precise, as stage 4 may take twice as long as stage 5, but I am fine with that.
Is it possible to access this information with Boto3?
I checked the method list_steps (here are the docs), but the response only tells me whether the step is running, with no other information.
DISCLAIMER: I know nothing about AWS EMR and Boto3
I will like to return to the user a percentage of completion of the task, is there anyway to do this?
Any way? Perhaps. Just register a SparkListener and intercept events as they come. That's how the web UI works under the covers (and the UI is the definitive source of truth for Spark applications).
Use spark.extraListeners property to register a SparkListener and do whatever you want with the events.
Quoting the official documentation's Application Properties:
spark.extraListeners A comma-separated list of classes that implement SparkListener; when initializing SparkContext, instances of these classes will be created and registered with Spark's listener bus. If a class has a single-argument constructor that accepts a SparkConf, that constructor will be called; otherwise, a zero-argument constructor will be called.
You could also consider the REST API interface:
In addition to viewing the metrics in the UI, they are also available as JSON. This gives developers an easy way to create new visualizations and monitoring tools for Spark. The JSON is available for both running applications, and in the history server. The endpoints are mounted at /api/v1. Eg., for the history server, they would typically be accessible at http://<server-url>:18080/api/v1, and for a running application, at http://localhost:4040/api/v1.
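If you go the REST API route, a rough Python sketch of estimating progress from completed stages could look like this; the driver URL is an assumption, and on EMR the driver UI/API is usually only reachable through the master node or a proxy:

import requests

def estimate_progress(driver_url="http://localhost:4040"):
    # list running applications exposed by the driver's REST API
    apps = requests.get(f"{driver_url}/api/v1/applications").json()
    app_id = apps[0]["id"]
    # fetch all stages and count the completed ones
    stages = requests.get(f"{driver_url}/api/v1/applications/{app_id}/stages").json()
    if not stages:
        return 0.0
    completed = sum(1 for s in stages if s["status"] == "COMPLETE")
    return completed / len(stages)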
This is not supported at the moment, and I don't think it will be anytime soon.
You'll just have to follow the application logs the old-fashioned way, so consider formatting your logs in a way that tells you what has actually finished.
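A tiny sketch of that log-formatting idea: emit a parseable marker after each named phase so whoever tails the logs can estimate completion. The phase names and the run_phase stub are made up for illustration.

import logging

logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)
log = logging.getLogger("progress")

PHASES = ["load", "transform", "write"]

def run_phase(name):
    pass  # placeholder for the real work of each phase

for i, phase in enumerate(PHASES, start=1):
    run_phase(phase)
    log.info("PROGRESS %d/%d phase=%s done", i, len(PHASES), phase)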