How to set lifecycle for bucket in Google Cloud Storage - python

I want to change the lifecycle of "my-bucket". I have this piece of code:
from google.cloud import storage

client = storage.Client(project='my-project')
bucket = client.get_bucket('my-bucket')
rules = {
    "action": {"type": "Delete"},
    "condition": {
        "age": 3
    }
}
bucket.lifecycle_rules = rules
The line bucket.lifecycle_rules = rules successfully sets the lifecycle on the bucket object, but somehow it doesn't commit the change to the remote side.
Can anyone help me with that?

Once you change the properties, you need to submit those changes.
Try adding this line:
bucket.patch()
https://googlecloudplatform.github.io/google-cloud-python/stable/storage-buckets.html#google.cloud.storage.bucket.Bucket.patch
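Putting the question's code and the fix together, a minimal sketch (one assumption on my part: lifecycle_rules expects a sequence of rule dicts, so the single rule is wrapped in a list here):
from google.cloud import storage

client = storage.Client(project='my-project')
bucket = client.get_bucket('my-bucket')

# lifecycle_rules expects an iterable of rule dicts, so wrap the single rule in a list
bucket.lifecycle_rules = [{
    "action": {"type": "Delete"},
    "condition": {"age": 3},
}]

# push the locally modified properties to the Cloud Storage API
bucket.patch()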

You can also set lifecycle delete rules on your bucket as follows:
# age is in days
bucket.add_lifecycle_delete_rule(age=175)
# .patch() is needed to push changes to gcp
bucket.patch()
Similarly you can also change the storage class:
# set storage to nearline after 3 days
bucket.add_lifecycle_set_storage_class_rule(age=3, storage_class='NEARLINE')
bucket.patch()
See also:
https://cloud.google.com/storage/docs/samples/storage-enable-bucket-lifecycle-management

Related

how to restart instance group via python google cloud library

I am not able to find any code sample or relevant documentation for the Python client library for Google Cloud.
I want to restart all VMs of a managed instance group via a Cloud Function.
To list instances I am using something like this:
import googleapiclient.discovery

def list_instances(compute, project, zone):
    result = compute.instances().list(project=project, zone=zone).execute()
    return result['items'] if 'items' in result else None
In my requirements file I have:
google-api-python-client==2.31.0
google-auth==2.3.3
google-auth-httplib2==0.1.0
From the command line this is possible via the SDK:
https://cloud.google.com/sdk/gcloud/reference/compute/instance-groups/managed/rolling-action/restart
gcloud compute instance-groups managed rolling-action restart NAME [--max-unavailable=MAX_UNAVAILABLE] [--region=REGION | --zone=ZONE] [GCLOUD_WIDE_FLAG …]
But I am not able to work out how to write the equivalent in Python.
This is an incomplete answer since the python docs are pretty unreadable to me.
Looking at the gcloud cli code (which I couldn't find an official repo for so I looked here),
the restart command is triggered by something called a "minimal action".
minimal_action = (client.messages.InstanceGroupManagerUpdatePolicy.
                  MinimalActionValueValuesEnum.RESTART)
In the Python docs, there are references to these fields in the applyUpdatesToInstances method.
So I think the relevant code is something similar to:
compute.instanceGroupManagers().applyUpdatesToInstances(
    project=project,
    zone=zone,
    instanceGroupManager='NAME',
    body={"allInstances": True, "minimalAction": "RESTART"},
)
There may or may not be a proper Python object for the body, the docs aren't clear.
And the result seems to be an Operation object of some kind, but I don't know if there is an execute() method or not.
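For what it's worth, requests built via googleapiclient.discovery are generally executed with .execute(), so a sketch of the call above (the project, zone and group name values are placeholders) might look like:
import googleapiclient.discovery

# build the Compute Engine client (application default credentials assumed)
compute = googleapiclient.discovery.build('compute', 'v1')

# ask the managed instance group to apply a RESTART to all of its instances
operation = compute.instanceGroupManagers().applyUpdatesToInstances(
    project='your-project-id',        # placeholder
    zone='us-central1-a',             # placeholder
    instanceGroupManager='your-mig',  # placeholder
    body={"allInstances": True, "minimalAction": "RESTART"},
).execute()

print(operation.get('status'))  # the call returns a zonal Operation resource as a dict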
This is confusing, because gcloud compute instance-groups managed rolling-action is syntactic sugar that does two things:
It turns on Proactive updater, by setting appropriate UpdatePolicy on the InstanceGroupManager resource
And it changes version name on the same resource to trigger an update.
It is covered in the docs in https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups#performing_a_rolling_replace_or_restart
Compare the gcloud and API tabs to get the idea.
Unfortunately I am illiterate in Python, so I am not able to translate it into Python code :(.
Using the documentation that @Grzenio provided, use the patch() method to restart the instance group. See the patch documentation to check its parameters.
This can be written in Python using the code below. I provided the required parameters project, zone, instanceGroupManager and body. The value of body is from the example in the documentation.
import googleapiclient.discovery
import json

project = 'your-project-id'
zone = 'us-central1-a'  # the zone of your instance group
instanceGroupManager = 'instance-group-1'  # instance group name
body = {
    "updatePolicy": {
        "minimalAction": "RESTART",
        "type": "PROACTIVE"
    },
    "versions": [{
        "instanceTemplate": "global/instanceTemplates/instance-template-1",
        "name": "v2"
    }]
}

compute = googleapiclient.discovery.build('compute', 'v1')

rolling_restart = compute.instanceGroupManagers().patch(
    project=project,
    zone=zone,
    instanceGroupManager=instanceGroupManager,
    body=body
)

restart_operation = rolling_restart.execute()  # execute the request
print(json.dumps(restart_operation, indent=2))
This will return an operation object, and the instance group should restart in a rolling fashion:
{
  "id": "3206367254887659944",
  "name": "operation-1638418246759-5d221f9977443-33811aed-eed3ee88",
  "zone": "https://www.googleapis.com/compute/v1/projects/your-project-id/zones/us-central1-a",
  "operationType": "patch",
  "targetLink": "https://www.googleapis.com/compute/v1/projects/your-project-id/zones/us-central1-a/instanceGroupManagers/instance-group-1",
  "targetId": "810482163278776898",
  "status": "RUNNING",
  "user": "serviceaccountused@your-project-id.iam.gserviceaccount.com",
  "progress": 0,
  "insertTime": "2021-12-01T20:10:47.654-08:00",
  "startTime": "2021-12-01T20:10:47.670-08:00",
  "selfLink": "https://www.googleapis.com/compute/v1/projects/your-project-id/zones/us-central1-a/operations/operation-1638418246759-5d221f9977443-33811aed-eed3ee88",
  "kind": "compute#operation"
}
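If the function needs to wait for the rolling restart to be accepted, one possible follow-up (a sketch, not part of the original answer) is to poll the returned zonal operation until it reports DONE:
import time

# poll the zonal operation returned by the patch request until it finishes
while restart_operation['status'] != 'DONE':
    time.sleep(5)
    restart_operation = compute.zoneOperations().get(
        project=project,
        zone=zone,
        operation=restart_operation['name']
    ).execute()

# surface any error reported by the operation
if 'error' in restart_operation:
    raise RuntimeError(restart_operation['error'])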

Configure diagnostic setting for azure database using Python SDK

I want to configure a diagnostic setting for an Azure database using Python. I know that I have to use the DiagnosticSettingsOperations class and the MonitorManagementClient client, with the create_or_update method as a starting point. I am fairly new to Python development, and I am struggling to put the pieces together.
However, there are no proper examples of what parameters to pass to the DiagnosticSettingsOperations class.
Sample code:
from azure.mgmt.monitor import MonitorManagementClient
from azure.identity import ClientSecretCredential

####### FUNCTION TO CREATE AZURE AUTHENTICATION USING SERVICE PRINCIPAL #######
def authenticateToAzureUsingServicePrincipal():
    # Authenticate to Azure using Service Principal credentials
    client_id = 'client_id'
    client_secret = 'client_secret'
    client_tenant_id = 'client_tenant_id'
    # Create Azure credential object
    servicePrincipalCredentialObject = ClientSecretCredential(tenant_id=client_tenant_id, client_id=client_id, client_secret=client_secret)
    return servicePrincipalCredentialObject

azureCredential = authenticateToAzureUsingServicePrincipal()
monitorManagerClient = MonitorManagementClient(azureCredential)
I want to configure Diagnostic setting for Azure sql database, which selects ALL Metrics and Logs by default and sends to a Log analytics workspace. Does anyone know how to proceed further?
The code looks like below:
# other code
monitorManagerClient = MonitorManagementClient(azureCredential)

# Creates or updates the diagnostic setting [put]
# RESOURCE_URI is the full resource ID of the SQL database, INSIGHT_NAME is the name of the diagnostic setting
BODY = {
    "workspace_id": "the resource id of the log analytics workspace",
    "metrics": [
        {
            "category": "Basic",
            "enabled": True,
            "retention_policy": {
                "enabled": False,
                "days": "0"
            }
        }
        # other categories
    ],
    "logs": [
        {
            "category": "SQLInsights",
            "enabled": True,
            "retention_policy": {
                "enabled": False,
                "days": "0"
            }
        }
        # other categories
    ],
    # "log_analytics_destination_type": "Dedicated"
}
diagnostic_settings = monitorManagerClient.diagnostic_settings.create_or_update(RESOURCE_URI, INSIGHT_NAME, BODY)
There is an example on GitHub that you can take a look at. If you want to select ALL metrics and logs, you should add them one by one to the metrics / logs lists in the BODY above, as sketched below.
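If you prefer not to hard-code the category names, a possible sketch (my assumptions: the diagnostic_settings_category operations and the category_type field behave as described in the SDK reference; RESOURCE_URI and INSIGHT_NAME are the placeholders from the snippet above) is to enumerate the supported categories and build the BODY from them:
# Enumerate the diagnostic categories supported by the resource and enable all of them
metrics, logs = [], []
categories = monitorManagerClient.diagnostic_settings_category.list(RESOURCE_URI)
for category in categories.value:
    entry = {
        "category": category.name,
        "enabled": True,
        "retention_policy": {"enabled": False, "days": "0"}
    }
    # category_type is expected to be either "Metrics" or "Logs"
    if category.category_type == "Metrics":
        metrics.append(entry)
    else:
        logs.append(entry)

BODY = {
    "workspace_id": "the resource id of the log analytics workspace",
    "metrics": metrics,
    "logs": logs
}
diagnostic_settings = monitorManagerClient.diagnostic_settings.create_or_update(RESOURCE_URI, INSIGHT_NAME, BODY)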

How Do I Find Errors and Debug A Cloud Function?

I have created a cloud function that when triggered is supposed to create a VM instance and run a python script.
However, the VM is not being created.
I can see the following message in the CF log, to do with my deployment:
resource.type = "cloud_function"
resource.labels.region = "europe-west2"
severity>=DEFAULT
severity=DEBUG
...
However, for the life of me, I can't see where to go to actually view the error itself.
I then Googled and found the following thread about an issue where Cloud Functions is not showing any logs.
Thinking it may be the same issue, I added the recommended environment variables to my deployment, but I still can't find the error anywhere in the logs.
Can anyone point me in the right direction?
Here is my cloud function code as well:
import os

from googleapiclient import discovery
from google.oauth2 import service_account

scopes = ["https://www.googleapis.com/auth/cloud-platform"]
sa_file = "key.json"
zone = "europe-west2-c"
project_id = "<<proj id>>"  # Project ID, not Project Name

credentials = service_account.Credentials.from_service_account_file(
    sa_file, scopes=scopes
)

# Create the Cloud Compute Engine service object
service = discovery.build("compute", "v1", credentials=credentials)


def create_instance(compute, project, zone, name):
    # Get the latest Debian Jessie image.
    image_response = (
        compute.images()
        .getFromFamily(project="debian-cloud", family="debian-9")
        .execute()
    )
    source_disk_image = image_response["selfLink"]

    # Configure the machine
    machine_type = "zones/%s/machineTypes/n1-standard-1" % zone

    config = {
        "name": name,
        "machineType": machine_type,
        # Specify the boot disk and the image to use as a source.
        "disks": [
            {
                "kind": "compute#attachedDisk",
                "type": "PERSISTENT",
                "boot": True,
                "mode": "READ_WRITE",
                "autoDelete": True,
                "deviceName": "instance-1",
                "initializeParams": {
                    "sourceImage": "projects/my_account/global/images/instance-image3",
                    "diskType": "projects/my_account/zones/europe-west2-c/diskTypes/pd-standard",
                    "diskSizeGb": "10",
                },
                "diskEncryptionKey": {},
            }
        ],
        "metadata": {
            "kind": "compute#metadata",
            "items": [
                {
                    "key": "startup-script",
                    "value": "sudo apt-get -y install python3-pip\npip3 install -r /home/will_charles/requirements.txt\ncd /home/will_peebles/\npython3 /home/will_charles/main.py",
                }
            ],
        },
        "serviceAccounts": [
            {
                "email": "837516068454-compute@developer.gserviceaccount.com",
                "scopes": ["https://www.googleapis.com/auth/cloud-platform"],
            }
        ],
        "networkInterfaces": [
            {
                "network": "global/networks/default",
                "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
            }
        ],
        "tags": {"items": ["http-server", "https-server"]},
    }

    return compute.instances().insert(project=project, zone=zone, body=config).execute()


def run(data, context):
    create_instance(service, project_id, zone, "pagespeed-vm-4")
The reason that you do not see anything in the logs for Cloud Functions is that your code is executing but is not logging the results of the API calls.
Your code is succeeding in calling the API to create a compute instance. That does not mean the API achieved its goal, only that the call itself was made. The API returns an operation handle that you then poll later to check the status. You are not doing that, so your Cloud Function has no idea that the instance creation failed.
To see logs for the create instance API, go to Operations Logging -> VM Instance and select "All instance_id". If the API call to create an instance does not succeed, there will be no instance ID to select, so you have to select all instances and then find the logs related to the API call.
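A possible sketch of that missing step (my addition, reusing the names from the question's code) is to poll the operation returned by instances().insert() and print any error so that it shows up in the Cloud Functions logs:
import time

def run(data, context):
    operation = create_instance(service, project_id, zone, "pagespeed-vm-4")

    # poll the zonal operation until it finishes
    while operation["status"] != "DONE":
        time.sleep(2)
        operation = (
            service.zoneOperations()
            .get(project=project_id, zone=zone, operation=operation["name"])
            .execute()
        )

    # anything printed here ends up in the Cloud Functions log
    if "error" in operation:
        print("Instance creation failed:", operation["error"])
    else:
        print("Instance created successfully")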

How to pass rds.DatabaseCluster secrets as environment variable in a ECS Task

I'm trying to set RDS Aurora credentials as environment variables to an ECS Task.
Initially I'm passing it as plaintext on environments.
I know the proper way to do it is using secrets, but ApplicationLoadBalancedTaskImageOptions expects a Secret and rds.DatabaseCluster returns a different type.
What is the correct way to manage the credentials on this case?
db is a rds.DatabaseCluster instance
task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
    image=ecs.ContainerImage.from_registry("sonarqube:8.2-community"),
    container_port=9000,
    # FIXME: by documentation this is the right way to pass creds, however this fails,
    # the database secret is not the same type as the expected one
    # secrets={
    #     "sonar.jdbc.password": ecs.Secret.from_secrets_manager(self.db.secret)
    # },
    environment={
        "sonar.jdbc.url": url,
        "sonar.jdbc.username": username,
        "sonar.jdbc.password": self.db.secret.secret_value_from_json("password").to_string()  # plaintext, FIXME
    }
)
What a déjà vu!
I posted an article about this topic two days ago:
https://medium.com/@mchlfchr/i-tell-you-a-secret-provide-database-credentials-to-an-ecs-fargate-task-in-aws-cdk-339df4e3d071
There you can clearly spot the differences between using secrets and environment variables.
If you want to pass it as a secret, you first have to store the value in either AWS SecretsManager or AWS Parameter Store. Then you pass the ARN of the secret, from one of those two services, as the value in the ECS task definition and ECS will automatically pull the real value from SecretsManager or Parameter Store when it instantiates the container. This is documented here.
If you want to consume a value from the secret store, it should be passed as a secret, not as an environment variable.
Replace the environment variable with secrets like below:
"secrets": [
    {
        "name": "MY_KEY",
        "valueFrom": "arn:aws:secretsmanager:us-west-2:12345656:secret:demo-0Nlyli"
    }
]
Just place the ARN and ECS will inject the value during run time.
To set an environment variable you need:
"environment": [
    {
        "name": "KEY",
        "value": "VALUE"
    }
]
So in your case:
"environment": [
    {
        "name": "sonar.jdbc.url",
        "value": "some-url"
    }
]
Using the TypeScript example from @mchlfchr, I got this working in Python as follows.
Creating a role and granting read permission on the database credential:
# Create IAM role for the task
self.task_role = iam.Role(
    self,
    id="SonarTaskRole",
    role_name="SonarTaskRole",
    assumed_by=iam.ServicePrincipal(service="ecs-tasks.amazonaws.com"),
    managed_policies=[
        iam.ManagedPolicy.from_aws_managed_policy_name("service-role/AmazonECSTaskExecutionRolePolicy")
    ]
)

# Grant permission to the task to read the secret from Secrets Manager
self.db_secret.grant_read(self.task_role)
And passing as a secret:
secrets={
    "sonar.jdbc.username": ecs.Secret.from_secrets_manager(self.db_secret, field="username"),
    "sonar.jdbc.password": ecs.Secret.from_secrets_manager(self.db_secret, field="password")
},
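Putting that together with the question's snippet, a sketch (my assumptions: db is the rds.DatabaseCluster, its generated secret holds username and password fields, and url is built elsewhere, e.g. from db.cluster_endpoint) could look like:
task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
    image=ecs.ContainerImage.from_registry("sonarqube:8.2-community"),
    container_port=9000,
    task_role=self.task_role,  # the role granted read access to the secret above
    environment={
        # non-sensitive settings stay as plain environment variables
        "sonar.jdbc.url": url,
    },
    secrets={
        # sensitive values are injected by ECS when the container starts
        "sonar.jdbc.username": ecs.Secret.from_secrets_manager(self.db.secret, field="username"),
        "sonar.jdbc.password": ecs.Secret.from_secrets_manager(self.db.secret, field="password"),
    },
)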

Handling S3 Bucket Trigger Event in Lambda Using Python

The AWS Lambda handler has a signature of
def lambda_handler(event, context):
However, I cannot find any documentation about the event's structure when the trigger is an S3 bucket receiving a put.
I thought that it might be defined in the s3 console, but couldn't find that there.
Anyone have any leads?
The event from S3 to the Lambda function will be in JSON format, as shown below:
{
   "Records":[
      {
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":The time, in ISO-8601 format, for example, 1970-01-01T00:00:00.000Z, when S3 finished processing the request,
         "eventName":"event-type",
         "userIdentity":{
            "principalId":"Amazon-customer-ID-of-the-user-who-caused-the-event"
         },
         "requestParameters":{
            "sourceIPAddress":"ip-address-where-request-came-from"
         },
         "responseElements":{
            "x-amz-request-id":"Amazon S3 generated request ID",
            "x-amz-id-2":"Amazon S3 host that processed the request"
         },
         "s3":{
            "s3SchemaVersion":"1.0",
            "configurationId":"ID found in the bucket notification configuration",
            "bucket":{
               "name":"bucket-name",
               "ownerIdentity":{
                  "principalId":"Amazon-customer-ID-of-the-bucket-owner"
               },
               "arn":"bucket-ARN"
            },
            "object":{
               "key":"object-key",
               "size":object-size,
               "eTag":"object eTag",
               "versionId":"object version if bucket is versioning-enabled, otherwise null",
               "sequencer":"a string representation of a hexadecimal value used to determine event sequence, only used with PUTs and DELETEs"
            }
         }
      },
      {
         // Additional events
      }
   ]
}
Here is the link to the AWS documentation, which can guide you: http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
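For example, a minimal handler that pulls the bucket name and object key out of such an event could look like this (a sketch based on the structure above; the print is just for illustration):
import urllib.parse

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # object keys arrive URL-encoded (spaces become '+', for example)
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"Received s3://{bucket}/{key}")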
I think your easiest route is just to experiment quickly:
Create a bucket using the console
Create a lambda that is triggered by puts to the bucket using the console
Ensure you choose the default execution role, so you create cloudwatch logs
The lambda function just needs to "print(event)" when called, which is then logged
Save an object to the bucket
You'll then see the event structure in the log; it's pretty self-explanatory.
Please refer to this URL for the event message structure: http://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
