Creating an Azure Data Factory linked service using managed identity with Python

I'm using Python scripts to create and manage Data Factory pipelines. When I want to create a linked service, I'm just using this code:
https://learn.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-python#create-a-linked-service
But now I want to create the linked service using managed identity rather than an account name and key, and I can't find any example of how to do it with Python.
I managed to do it manually in the Data Factory UI, but I want to do it using Python.
Thanks!

service_endpoint (str, required): Blob service endpoint of the Azure Blob Storage resource. It is mutually exclusive with the connectionString and sasUri properties.
According to the API documentation, you should use service_endpoint to create a linked service that authenticates with managed identity: pass the blob service endpoint to service_endpoint instead of a connection string or account key.
The following is my test code:
from azure.mgmt.datafactory.models import AzureBlobStorageLinkedService, LinkedServiceResource

ls_name = 'storageLinkedService001'
# Blob service endpoint of the storage account (no account key or SAS token)
endpoint_string = 'https://<account name>.blob.core.windows.net'
ls_azure_storage = LinkedServiceResource(
    properties=AzureBlobStorageLinkedService(service_endpoint=endpoint_string))
ls = adf_client.linked_services.create_or_update(rg_name, df_name, ls_name, ls_azure_storage)
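For completeness, a minimal sketch of the client setup the snippet above assumes, following the quickstart linked in the question (the subscription, tenant, and service principal values are placeholders):
from azure.identity import ClientSecretCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

rg_name = '<resource group name>'
df_name = '<data factory name>'

credential = ClientSecretCredential(
    tenant_id='<tenant id>',
    client_id='<service principal id>',
    client_secret='<service principal key>')
adf_client = DataFactoryManagementClient(credential, '<subscription id>')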

How to do DependsOn in aws-cdk-python

I'm trying to create a DocumentDB cluster along with a DB instance. Both are defined in the same stack class, but when I run the code the instance and the cluster start being created in parallel, and it throws an error that the cluster name is not found during instance creation.
I want to know whether there is any method like dependsOn in the AWS CDK for Python.
Something like this in JS: dbInstance.addDependsOn(dbCluster);
In AWS CDK Python, you can declare an explicit dependency between resources so that CDK creates or updates them in the correct order. Low-level (Cfn*) resources expose add_depends_on directly, while higher-level constructs such as s3.Bucket or sqs.Queue declare the dependency on their construct node with node.add_dependency.
Something like this:
from aws_cdk import aws_s3 as s3, aws_sqs as sqs

bucket = s3.Bucket(self, "MyBucket")
queue = sqs.Queue(self, "MyQueue")
# Add a dependency from the SQS queue to the S3 bucket,
# so the bucket is created before the queue
queue.node.add_dependency(bucket)
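For the DocumentDB case in the question, a rough sketch using the low-level Cfn constructs (the property values are placeholders):
from aws_cdk import aws_docdb as docdb

db_cluster = docdb.CfnDBCluster(
    self, "DbCluster",
    master_username="masteruser",
    master_user_password="<change-me>",
)
db_instance = docdb.CfnDBInstance(
    self, "DbInstance",
    db_cluster_identifier=db_cluster.ref,
    db_instance_class="db.r5.large",
)
# Ensure the cluster is fully created before the instance
db_instance.add_depends_on(db_cluster)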

Is it possible to extend API Gateway APIs using the Python CDK

Is it possible to get an existing API Gateway REST API using its ARN (or id) and then add more endpoints to it? For example, I have an API Gateway whose root path is '/path-one', with more APIs attached to it like '/path-one/one', and now I want to get this API Gateway using its
rest_api_root_resource_id and then add a new API like '/path-one/two' under the same path. Is this possible? How can I achieve it using the Python CDK?
For example, in the same way we access a Lambda function using its ARN:
self.my_lambda = _lambda.Function.from_function_arn(self, "my-lambda", my_lambda_arn)
I'm looking for the same kind of thing for API Gateway. Any help would be highly appreciated.
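A minimal sketch of one possible approach, assuming aws_apigateway.RestApi.from_rest_api_attributes with placeholder ids (this imports the existing API by its id and root resource id rather than by ARN):
from aws_cdk import aws_apigateway as apigw

# Import the existing REST API by its id and root resource id (placeholder values)
api = apigw.RestApi.from_rest_api_attributes(
    self, "imported-api",
    rest_api_id="<rest api id>",
    root_resource_id="<root resource id>",
)

# Add a new resource and method under the imported API.
# NOTE: this assumes the path can be created fresh; if '/path-one' already
# exists in the deployed API, re-creating it here may conflict.
path_one = api.root.add_resource("path-one")
two = path_one.add_resource("two")
two.add_method("GET", apigw.LambdaIntegration(self.my_lambda))
Also note that changes made to an imported API are not deployed automatically, so a new deployment of the API is needed before the added route goes live.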

Google Cloud get bucket - works with the CLI but not in Python

I was asked to perform an integration with an external Google Storage bucket, and I received a credentials JSON.
While trying to run
gsutil ls gs://bucket_name
(after configuring myself with the creds JSON) I received a valid response, and the same when I tried to upload a file into the bucket.
When trying to do it with Python 3, it does not work.
Using google-cloud-storage==1.16.0 (I also tried the newer versions), I'm doing:
from google.cloud import storage
from google.oauth2 import service_account

project_id = credentials_dict.get("project_id")
credentials = service_account.Credentials.from_service_account_info(credentials_dict)
client = storage.Client(credentials=credentials, project=project_id)
bucket = client.get_bucket(bucket_name)
But on the get_bucket line, I get:
google.api_core.exceptions.Forbidden: 403 GET https://www.googleapis.com/storage/v1/b/BUCKET_NAME?projection=noAcl: USERNAME@PROJECT_ID.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket.
The external partner I'm integrating with says the user is set up correctly, and to prove it they point out that I can perform the action with gsutil.
Can you please assist? Any idea what might be the problem?
The answer was that the creds were indeed missing the storage.buckets.get permission, but it did work when I called client.bucket(bucket_name) instead of client.get_bucket(bucket_name): bucket() only builds a local reference without making an API call, so that permission is not needed.
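A minimal sketch of that working path, assuming the same credentials_dict from the question and a hypothetical object name:
from google.cloud import storage
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_info(credentials_dict)
client = storage.Client(credentials=credentials, project=credentials_dict.get("project_id"))

# bucket() returns a local Bucket reference without calling the API,
# so storage.buckets.get is not required
bucket = client.bucket(bucket_name)

# Permissions are only checked when you actually operate on objects
blob = bucket.blob("some/object.txt")
blob.upload_from_filename("local-file.txt")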
Please follow these steps in order to correctly set up the Cloud Storage client library for Python. In general, the Cloud Storage libraries can use Application Default Credentials or environment variables for authentication.
Notice that the recommended method is to set up authentication using an environment variable (i.e. if you are using Linux: export GOOGLE_APPLICATION_CREDENTIALS="/path/to/[service-account-credentials].json" should work) and avoid the use of the service_account.Credentials.from_service_account_info() method altogether:
from google.cloud import storage

# Credentials are picked up from GOOGLE_APPLICATION_CREDENTIALS
storage_client = storage.Client(project='project-id-where-the-bucket-is')
bucket_name = "your-bucket"
bucket = storage_client.get_bucket(bucket_name)
should simply work, because the authentication is handled by the client library via the environment variable.
Now, if you are interested in explicitly using the service account instead of using the service_account.Credentials.from_service_account_info() method, you can use the from_service_account_json() method directly in the following way:
from google.cloud import storage

# Explicitly use service account credentials by specifying the private key file
storage_client = storage.Client.from_service_account_json(
    '/[service-account-credentials].json')
bucket_name = "your-bucket"
bucket = storage_client.get_bucket(bucket_name)
Find all the relevant details on how to provide credentials to your application here.
tl;dr: don't use client.get_bucket at all.
See https://stackoverflow.com/a/51452170/705745 for a detailed explanation and solution.

Do I need .ebextensions to use AWS resources like DynamoDB or SNS?

I was building a Python web app with AWS Elastic Beanstalk, and I was wondering whether it's necessary to create a .ebextensions/xyz.config file to use resources like DynamoDB, SNS, etc.
Here is a sample using boto3; I was able to connect from my web app and put data into the table without defining any configuration files:
import boto3

db = boto3.resource('dynamodb', region_name='us-east-1')
table = db.Table('StudentInfo')
I appreciate your inputs.
You do not need .ebextensions to create a DynamoDB table to work with Beanstalk. However, you can, as described here. That example uses the CloudFormation template syntax to specify a DynamoDB resource. If it is not in a .ebextensions file, you'd create the DynamoDB table through an AWS SDK call (see the sketch below) or the DynamoDB console and make the endpoint available to your Django application.
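A minimal sketch of that SDK route, assuming a simple table keyed on a student_id string attribute:
import boto3

# One-off setup outside of .ebextensions: create the table with the SDK
dynamodb = boto3.client('dynamodb', region_name='us-east-1')
dynamodb.create_table(
    TableName='StudentInfo',
    AttributeDefinitions=[{'AttributeName': 'student_id', 'AttributeType': 'S'}],
    KeySchema=[{'AttributeName': 'student_id', 'KeyType': 'HASH'}],
    BillingMode='PAY_PER_REQUEST',
)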
You can specify an SNS topic for Beanstalk to publish events to, or, as in the DynamoDB example above, create one as a CloudFormation resource. The difference between the two approaches is that in the former the Beanstalk environment owns the SNS topic, whereas in the latter it is the underlying CloudFormation stack that does. If you want to use the SNS topic for anything other than publishing environment health events, use the latter approach. For example, to integrate the SNS topic with DynamoDB, you must specify it as a resource in a .ebextensions file rather than as an option setting.
You would need to switch to using IAM roles. Read more here.
I am assuming that you didn't change the default role that gets assigned to the Elastic Beanstalk (EB) instance during creation. The default instance profile role allows EB to use the other AWS services it needs to create the various components.
Until you understand more about IAM, creating roles, and assigning permissions, you can attach AWS managed permission policies to this role to test your application (just search for DynamoDB and SNS).

CloudFormation wildcard search with boto3

I have been tasked with converting some bash scripting used by my team that performs various CloudFormation tasks into Python using the boto3 library. I am currently stuck on one item: I cannot seem to determine how to do a wildcard-type search where a CloudFormation stack name contains a given string.
My bash version using the AWS CLI is as follows:
aws cloudformation --region us-east-1 describe-stacks --query "Stacks[?contains(StackName,'myString')].StackName" --output json > stacks.out
This works on the CLI, outputting the results to a JSON file, but I cannot find any examples online of a similar "contains" search using boto3 with Python. Is it possible?
Thanks!
Yes, it is possible, although boto3 has no built-in equivalent of the CLI's --query option. The basic describe-stacks call looks like this:
import boto3

# create a boto3 client first
cloudformation = boto3.client('cloudformation', region_name='us-east-1')
# use the client to make a particular API call
response = cloudformation.describe_stacks(StackName='myString')
print(response)
# as an aside, you'd need a different client to communicate
# with a different service
# ec2 = boto3.client('ec2', region_name='us-east-1')
# regions = ec2.describe_regions()
where response is a Python dictionary that, among other things, contains the description of the stack. Note that passing StackName is an exact-name lookup, so to reproduce the contains() filtering from the CLI query you call describe_stacks without StackName and filter the results in Python, as sketched below.
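A minimal sketch of that client-side filtering, assuming the same 'myString' substring and the stacks.out output file from the bash version:
import boto3
import json

cloudformation = boto3.client('cloudformation', region_name='us-east-1')

# Paginate through every stack and keep the names that contain 'myString'
matching = []
paginator = cloudformation.get_paginator('describe_stacks')
for page in paginator.paginate():
    for stack in page['Stacks']:
        if 'myString' in stack['StackName']:
            matching.append(stack['StackName'])

# Mirror the CLI behaviour of writing the result as JSON to stacks.out
with open('stacks.out', 'w') as f:
    json.dump(matching, f, indent=2)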
