I'm trying to create a DocumentDB cluster along with a DB instance. Both constructs are defined in the same stack class, but when I run the code, the instance and the DB cluster are created in parallel and the deployment fails with an error that the cluster name cannot be found for the instance creation.
I want to know whether there is a method like dependsOn in the AWS CDK for Python.
Something like this in JS: dbInstance.addDependsOn(dbCluster);
In AWS CDK for Python, you can create an explicit dependency between resources. On low-level Cfn* (L1) resources there is add_depends_on (renamed to add_dependency in newer CDK v2 releases); on higher-level constructs you go through the construct tree with node.add_dependency. Either way, the CDK will then create or update the resources in the correct order.
Something like this:
from aws_cdk import aws_s3 as s3, aws_sqs as sqs

bucket = s3.Bucket(self, "MyBucket")
queue = sqs.Queue(self, "MyQueue")
# Add a dependency from the SQS queue to the S3 bucket;
# L2 constructs expose this through their node.
queue.node.add_dependency(bucket)
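For the DocumentDB case from the question, a minimal sketch could look like the following. It assumes the low-level aws_docdb Cfn* constructs; the construct IDs and property values here are illustrative, not taken from the original stack.
from aws_cdk import aws_docdb as docdb

db_cluster = docdb.CfnDBCluster(
    self, "DocDbCluster",
    master_username="docdbadmin",       # illustrative value
    master_user_password="change-me",   # illustrative value
)
db_instance = docdb.CfnDBInstance(
    self, "DocDbInstance",
    db_cluster_identifier=db_cluster.ref,
    db_instance_class="db.r5.large",
)
# On Cfn* (L1) resources the method is add_depends_on
# (add_dependency in newer CDK v2 releases).
db_instance.add_depends_on(db_cluster)
Note that passing db_cluster.ref as db_cluster_identifier already gives the CDK an implicit ordering hint; the explicit call just makes the dependency unambiguous.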
I'm having some trouble understanding how to pass an output of one resource as an input to another resource, so that they have a dependency and the creation order works properly.
Scenario:
Resource B depends on Resource A.
I was trying to pass something like this to Resource B:
opts = ResourceOptions(depends_on=[ResourceA])
But for some reason it acts as if that parameter weren't there and keeps creating Resource B before Resource A, which throws an error.
If I run pulumi up a second time, Resource A already exists, so Resource B gets created.
I noticed that you can also pass an output as an input of another resource, and because of this Pulumi understands that there is a relationship and orders the creation automatically:
https://www.pulumi.com/docs/intro/concepts/inputs-outputs/
But I can't get my head around how to pass that, so any help regarding this would be appreciated.
I also followed the explanation below on how to use ResourceOptions, which I think I'm using correctly in the code above, but still no luck:
How to control resource creation order in Pulumi
Thanks in advance.
#mrthopson,
Let me try to explain using one of the public examples. I took it from this Pulumi example:
https://github.com/pulumi/examples/blob/master/aws-ts-eks/index.ts
// Create a VPC for our cluster.
const vpc = new awsx.ec2.Vpc("vpc", { numberOfAvailabilityZones: 2 });

// Create the EKS cluster itself and a deployment of the Kubernetes dashboard.
const cluster = new eks.Cluster("cluster", {
    vpcId: vpc.id,
    subnetIds: vpc.publicSubnetIds,
    instanceType: "t2.medium",
    desiredCapacity: 2,
    minSize: 1,
    maxSize: 2,
});
The example first creates a VPC in AWS. The VPC contains a number of different networks, and the identifiers of these networks are exposed as outputs. When we create the EKS cluster, we pass the IDs of the public subnets (the output vpc.publicSubnetIds) as an input to the cluster (the subnetIds input).
That is the only thing you need to do to make the EKS cluster depend on the VPC. When you run Pulumi, the engine works out that it first needs to create the VPC, and only then can it create the EKS cluster.
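Since your snippet is Python, here is a minimal sketch of the same idea with pulumi_aws; the resource types and names are illustrative, not taken from your program. Passing an output as an input is what creates the implicit dependency; depends_on is only needed when no output is passed.
import pulumi
import pulumi_aws as aws

bucket = aws.s3.Bucket("source-bucket")

# Passing bucket.id (an Output) as an input makes Pulumi create the bucket first.
notification = aws.s3.BucketNotification(
    "source-bucket-notification",
    bucket=bucket.id,
)

# If no output is passed, state the dependency explicitly on the
# resource *instance* (not the resource class):
other = aws.s3.Bucket(
    "other-bucket",
    opts=pulumi.ResourceOptions(depends_on=[bucket]),
)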
Ringo
I'm using Python scripts to create and manage Data Factory pipelines. When I want to create a linked service, I'm just using this code:
https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-python#create-a-linked-service
But now I want to create the linked service using a managed identity rather than the account name and key, and I can't find any example of how to do it with Python.
I managed to do it manually like so:
but I want to do it using Python.
thanks!
service_endpoint (str, Required): Blob service endpoint of the Azure Blob Storage resource. It is mutually exclusive with the connectionString and sasUri properties.
According to the API documentation, you should use service_endpoint to create a linked service with a managed identity: pass the Blob service endpoint to service_endpoint instead of a connection string.
The following is my test code:
from azure.mgmt.datafactory.models import AzureBlobStorageLinkedService, LinkedServiceResource

ls_name = 'storageLinkedService001'
endpoint_string = 'https://<account name>.blob.core.windows.net'
# No account key or connection string: the service authenticates with the factory's managed identity.
ls_azure_storage = LinkedServiceResource(properties=AzureBlobStorageLinkedService(service_endpoint=endpoint_string))
ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
Result:
How can we retrieve system information from a newly deployed/provisioned Linux EC2 instance using CDK and Python in a Lambda function?
I'd like to know if it's possible to pull an environment variable, or variables, that are also defined in /etc/environment.d/servervars.env.
I'd like the values to become available inside my Lambda function. My current Lambda function knows the instance ID.
Since the information is static and is added while the instances are provisioned, you could add something like this to the provisioning script:
MY_ID=`curl http://169.254.169.254/latest/meta-data/instance-id --silent`
APPLICATION=payroll
aws ec2 create-tags --resources $MY_ID --tags Key=Application,Value=$APPLICATION
The AWS CLI requires AWS credentials to create the tags. These can be provided by assigning an IAM role to the instance with the ec2:CreateTags permission.
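On the Lambda side, a minimal sketch of reading that tag back with boto3 could look like this, assuming the instance was tagged as above, the function's role allows ec2:DescribeTags, and the event carries the instance ID (the event shape here is illustrative):
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    instance_id = event["instance_id"]  # illustrative event shape
    response = ec2.describe_tags(
        Filters=[
            {"Name": "resource-id", "Values": [instance_id]},
            {"Name": "key", "Values": ["Application"]},
        ]
    )
    tags = {t["Key"]: t["Value"] for t in response["Tags"]}
    return tags.get("Application")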
I was building a Python web app with AWS Elastic Beanstalk, and I was wondering whether it's necessary to create a .ebextensions/xyz.config file to use resources like DynamoDB, SNS, etc.
Here is some sample code using boto3; I was able to connect from my web app and put data into the table without defining any configuration files ...
import boto3

db = boto3.resource('dynamodb', region_name='us-east-1')
table = db.Table('StudentInfo')
appreciate your inputs
You do not need .ebextensions to create a DynamoDB table to use with Beanstalk. However, you can, as described here. That example uses CloudFormation template syntax to specify a DynamoDB resource. If it is not in an .ebextensions file, you'd create the DynamoDB table through an AWS SDK or the DynamoDB console and make the endpoint available to your Django application.
You can specify an SNS topic for Beanstalk to publish events to, or, as in the DynamoDB example above, create one as a CloudFormation resource. The difference between the two approaches is ownership: in the former, the Beanstalk environment owns the SNS topic; in the latter, it is the underlying CloudFormation stack that does. If you want to use the SNS topic for things other than publishing environment health events, use the latter approach. For example, to integrate the SNS topic with DynamoDB, you must use the latter approach (i.e., specify it as a resource in an .ebextensions file rather than as an option setting).
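Either way, the application itself can talk to SNS through boto3 just like the DynamoDB snippet above; a minimal sketch, with an illustrative topic ARN and assuming the instance profile role allows sns:Publish:
import boto3

sns = boto3.client('sns', region_name='us-east-1')
sns.publish(
    TopicArn='arn:aws:sns:us-east-1:123456789012:my-topic',  # illustrative ARN
    Message='StudentInfo table updated',
)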
You would need to switch to using IAM roles. Read more here.
I am assuming that you didn't change the default role that gets assigned to the Elastic Beanstalk (EB) instance during creation. The default instance profile role allows EB to utilize other AWS services it needs to create the various components.
Until you understand more about IAM, creating roles, and assigning permissions, you can attach AWS managed policies to this role to test your application (just search for Dynamo and SNS).
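If you prefer to do that from code rather than the console, here is a hedged sketch using boto3; the role name below is the usual Beanstalk default and the policy ARNs are the standard managed policies, but verify both in your account:
import boto3

iam = boto3.client('iam')
# aws-elasticbeanstalk-ec2-role is the default instance profile role name.
for policy_arn in [
    'arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess',
    'arn:aws:iam::aws:policy/AmazonSNSFullAccess',
]:
    iam.attach_role_policy(RoleName='aws-elasticbeanstalk-ec2-role', PolicyArn=policy_arn)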
Like the title says, I would like to register a fresh EC2 instance with an OpsWorks stack. The problem is that the "register" command can only be run from the CLI (shell script), not from a Lambda function (Python, Java, or JS). Is there any workaround for this?
Take a look at this: register_instance for Boto3/OpsWorks. There are two parts to registering an instance, and Boto3 can only do the second part.
We do not recommend using this action to register instances. The complete registration operation has two primary steps, installing the AWS OpsWorks agent on the instance and registering the instance with the stack. RegisterInstance handles only the second step. You should instead use the AWS CLI register command, which performs the entire registration operation. For more information, see Registering an Instance with an AWS OpsWorks Stack.
To run the CLI in your Lambda function, make sure your Lambda execution role has the privileges to call OpsWorks, bundle the AWS CLI with your function (it is not included in the Lambda Python runtime by default), and then call something like this in your Python Lambda:
import subprocess
subprocess.call(["aws", "--region", "us-east-1", "opsworks", "register-instance", "--stack-id", "<stack-id>"])
Look at OpsWorks CLI for more info.