Short Version:
I am creating an Azure Active Directory group, an Azure Key Vault to which the group has access, a key in that vault, and a Postgres server whose principal is a member of the group. The server is wrapped in a ComponentResource.
The server is supposed to use the key for encryption, but it does not have access right away: it can only access the key if there is a time delay before trying to use it.
Question: How can I make sure that the permissions have propagated before trying to use the key to encrypt?
Long Version:
The infrastructure is the same as I have described above. It is created like this:
group = azuread.Group(
    "security_group",
    display_name=display_name,
    description=description,
    owners=None,
    opts=ResourceOptions(),
    security_enabled=True,
)

policy = azure.keyvault.AccessPolicyEntryArgs(
    object_id=group.object_id,
    tenant_id=tenant_id,
    permissions=azure.keyvault.PermissionsArgs(
        keys=['get', 'list', 'unwrapKey', 'wrapKey']
    )
)

vault = azure.keyvault.Vault(
    "key_vault",
    resource_group_name=resource_group_name,
    location=location,
    properties=azure.keyvault.VaultPropertiesArgs(
        access_policies=[policy],
        # other properties
    ),
    opts=opts
)
class Postgres(ComponentResource):
    def __init__(self, group, vault):
        # ComponentResource subclasses must register themselves;
        # the type token is abbreviated here
        super().__init__("custom:resource:Postgres", "postgres")

        server = azure.dbforpostgresql.Server(
            "postgres_server",
            identity=azure.dbforpostgresql.ResourceIdentityArgs(type="SystemAssigned"),
            # other properties
        )

        postgres_principal_group_membership = azuread.GroupMember(
            "postgres-group-member",
            group_object_id=group.object_id,
            member_object_id=server.identity.principal_id,
        )

        key = azure.keyvault.Key(
            "encryption-key",
            key_name="encryption-key",
            # other properties
        )

        def make_key_name(args):
            vault_name, key_name, key_uri_with_version = args
            key_version = key_uri_with_version.rsplit('/', 1)[-1]
            return f"{vault_name}_{key_name}_{key_version}"

        postgres_key_name = Output.all(
            vault.name,
            key.name,
            key.key_uri_with_version,
        ).apply(make_key_name)
        ### Everything before here is created without issues
        ### This resource cannot be created during the first attempt to run this - but it is successful during the second run
        ### We can also make it succeed during the first try if we do:
        # import time
        # time.sleep(120)
        ### This sleep does not need to be placed right here, or inside the constructor at all.
        ### It will help as long as it happens after creating the vault, but before completing the constructor.
        azure.dbforpostgresql.ServerKey(
            "server-use-encryption-key",
            key_name=postgres_key_name,
            server_key_type="AzureKeyVault",
            server_name=server.name,
            uri=key.key_uri_with_version,
            # other properties
        )
Postgres(group, vault)
When this is executed for the first time, the following error occurs:
Error: resource partially created but read failed autorest/azure: Service returned an error.
Status=404
Code="ResourceNotFound"
Message="The requested resource of type 'Microsoft.DBforPostgreSQL/servers/keys' with name '<name of the key>' was not found.":
Code="AzureKeyVaultMissingPermissions"
Message="The server '<name of the postgres server>' requires following Azure Key Vault permissions: 'Get, WrapKey, UnwrapKey'. Please grant any missing permissions to the service principal with ID '<postgres server principal ID>'."
I have verified that the problem is one of timing by bluntly using time.sleep(120) before attempting to connect the encryption key, which made the process work. In order to make the code more stable/quicker (rather than waiting for a fixed time and hoping that it will be long enough), I think that checking for the actual permissions is the way to go.
Currently, I don't see any way to do this with the azure-native provider by Pulumi.
Therefore, I am thinking of calling the Azure API directly, although that would mean a new dependency, which is not ideal in itself.
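As a middle ground, I am also considering confining the delay to resource creation with a Pulumi dynamic resource - a rough, untested sketch with made-up names:
import time
from pulumi.dynamic import CreateResult, Resource, ResourceProvider

class _SleepProvider(ResourceProvider):
    def create(self, props):
        # Runs only when the resource is actually created, so previews
        # and no-op updates skip the delay
        time.sleep(props["seconds"])
        return CreateResult(id_="sleep", outs=props)

class Sleep(Resource):
    """Hypothetical helper: a resource whose creation just waits."""
    def __init__(self, name, seconds, opts=None):
        super().__init__(_SleepProvider(), name, {"seconds": seconds}, opts)

# Usage idea: make the ServerKey depend on the sleep, which in turn
# depends on the group membership:
#   wait = Sleep("wait-for-aad", 120,
#                opts=ResourceOptions(depends_on=[postgres_principal_group_membership]))
#   azure.dbforpostgresql.ServerKey(..., opts=ResourceOptions(depends_on=[wait]))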
Question: Is there a better way to achieve the desired result? I've tried various explicit dependsOn values when setting the encryption key, with no success.
Thanks!
I am trying to access the metrics for client accounts using the Google Ads API, through the Python client.
Currently I'm using a modified version of the get_campaign_stats_to_csv.py example, with the query:
import datetime

last_three_days = datetime.datetime.today() - datetime.timedelta(days=3)
query = """
    SELECT
      customer.descriptive_name,
      metrics.cost_micros
    FROM customer
    WHERE
      segments.date > '{last_three_days}'
      AND segments.date <= '{today}'""".format(
    last_three_days=last_three_days.strftime('%Y-%m-%d'),
    today=datetime.datetime.today().strftime('%Y-%m-%d'))
It requires the command-line argument --customer_id= for the account we're reporting on, used as follows:
search_request.customer_id = customer_id # e.g., '--customer_id=1234567890'
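For context, in recent versions of the client library the code around that line looks roughly like this (details vary by API version, so treat this as a sketch):
# Build and issue a streaming search request against one account
ga_service = client.get_service("GoogleAdsService")

search_request = client.get_type("SearchGoogleAdsStreamRequest")
search_request.customer_id = customer_id  # e.g., '--customer_id=1234567890'
search_request.query = query

# Stream back the rows matching the query
stream = ga_service.search_stream(search_request)
for batch in stream:
    for row in batch.results:
        print(row.customer.descriptive_name, row.metrics.cost_micros)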
The problem is that when I use my Manager account customer id 1234567890, I get the error:
error_code {
  query_error: REQUESTED_METRICS_FOR_MANAGER
}
message: "Metrics cannot be requested for a manager account. To retrieve metrics, issue
separate requests against each client account under the manager account."
Which I assume means I should use the client ID. But when I use the client ID 0987654321, I get the error:
error_code {
  authorization_error: USER_PERMISSION_DENIED
}
message: "User doesn't have permission to access customer. Note: If you're accessing a
client customer, the manager's customer id must be set in the 'login-customer-id'
header. See https://developers.google.com/google-ads/api/docs/concepts/call-structure#cid"
The link in the error message leads to the documentation for the login-customer-id header, which brings me back to square one: the API spits the dummy when I use the manager account ID.
I've checked out this Stack Overflow question, but I think we're having different problems, as all my accounts have the red 'TEST ACCOUNT' flag next to them.
As a final note: there are two test client accounts, both of which I've set up with quasi campaigns.
I wrote that example.
You can set the client ID in the command line or the YAML file. It will not work if you use a manager account ID.
I know the terminology is confusing but that's what that API expects.
If you have any further issues with it, let me know.
So I didn't have the manager account number in google-ads.yaml. Adding it fixed my problem when I use the client ID as the command-line argument.
For reference, the manager account number goes in the login_customer_id field of google-ads.yaml.
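The same thing can also be done in code instead of the YAML file - a sketch using load_from_dict, with placeholder credentials (the exact import path and required keys vary by library version):
from google.ads.googleads.client import GoogleAdsClient

# Placeholder credentials. The key point is login_customer_id: it must be the
# MANAGER account ID (digits only, no dashes), while the customer_id passed
# per request is the CLIENT account being reported on.
client = GoogleAdsClient.load_from_dict({
    "developer_token": "...",
    "client_id": "...",
    "client_secret": "...",
    "refresh_token": "...",
    "login_customer_id": "1234567890",  # manager account ID
    "use_proto_plus": True,  # required by newer versions of the library
})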
I'm trying to attach new permissions to an existing IAM role using Python boto3. I'm using the append() method to attach new permissions to the role's policy document, but it is not actually adding the permissions to the role.
Python:
import boto3

iam = boto3.client('iam', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)

rolename = ROLENAME
policyname = POLICYNAME

# Get role policy
response = iam.get_role_policy(RoleName=rolename, PolicyName=policyname)
add_permission = response['PolicyDocument']['Statement']

# Assume json_perm is the permission that needs to be attached inside the
# Statement block of the policy document.
json_perm = """{'Action':'*','Resource':'','Effect':'Allow'}"""

# Attaching new permissions to the role
add_permission.append(json_perm)
print(add_permission)

# Get response after appending
new_response = iam.get_role_policy(RoleName=rolename, PolicyName=policyname)
print(new_response)
When I print add_permission, I can see the new permission appended to the policy document.
But I can't see that permission in the AWS console, and if I print new_response after appending, the newly added permission doesn't show up in the terminal output either.
Does appending new permissions to the IAM role not actually change the role?
How to attach new permissions to the IAM role PolicyDocument using python boto3?
Thanks.
Does appending new permissions to the IAM role not actually change the role?
This does not work because you are not actually updating the policy at AWS. You are just appending to a Python variable, add_permission; that does not automatically translate into changes to the actual policy at AWS.
For that, you have to make a put_role_policy call to AWS to update the policy.
You can try the following code:
import json

# Get role policy
response = iam.get_role_policy(
    RoleName=rolename,
    PolicyName=policyname)
add_permission = response['PolicyDocument']
#print(add_permission)

# Attaching new permissions to the role
#add_permission.append(json_perm)
#print(add_permission)

# NOT a good idea to allow all actions on all resources
add_permission['Statement'].append({
    'Action': '*',
    'Resource': '*',
    'Effect': 'Allow',
    'Sid': 'AllowAll'})

response = iam.put_role_policy(
    RoleName=rolename,
    PolicyName=policyname,
    PolicyDocument=json.dumps(add_permission)
)
I'm trying to get the CPU utilization for the EC2 instances in an account. My code is like the following:
def GetRegions():
    ...  # returns an array of regions

def getEC2InstanceID(RegionName):
    cloudwatch = boto3.client('cloudwatch', region_name=RegionName)
    response = cloudwatch.get_metric_statistics(
        .
        .
        .)
    ...  # returns an array of EC2 instance IDs

def EC2_Average_Utilization(InstanceID, RegionName):
    ...  # returns the average CPU usage

def main():
    regions = GetRegions()
    for i in range(len(regions)):
        print(regions[i])
        instance_id = getEC2InstanceID(regions[i])
        print(instance_id)  # prints all the instances, if there are any
        if type(instance_id) == list:
            for j in range(len(instance_id)):
                print(instance_id[j])
                print("For InstanceID " + instance_id[j] + ":")
                EC2_Average_Utilization(instance_id[j], regions[i])
This code executes perfectly for all the regions under a single account. What would the procedure be to do the same thing across multiple AWS accounts?
N.B. I've seen the approach of configuring .aws/config with multiple profiles, one per account, in .aws/credentials, but as I'm generating the regions in the code, I don't want to specify them there.
You will need to use a boto3 Session object, the Security Token Service (STS), and a call to assume_role for each account/region combination. The effect is the same as with named profiles: you need a role in each account with adequate permissions to call the API methods (EC2, CloudWatch, etc.). Also, the target roles need a trust relationship back to the original account credentials.
sts = boto3.client('sts')

# This is called with your default credentials; target roles need to trust this identity
creds = sts.assume_role(RoleArn='...', RoleSessionName='...')

# Set up a session with the temporary credentials
session = boto3.Session(
    aws_access_key_id=creds['Credentials']['AccessKeyId'],
    aws_secret_access_key=creds['Credentials']['SecretAccessKey'],
    aws_session_token=creds['Credentials']['SessionToken'],
    region_name='...')

# All subsequent clients/resources should be instantiated from the session object
cloudwatch = session.client('cloudwatch')
Hope this helps.
See https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sts.html#STS.Client.assume_role
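To sketch how this combines with the loop from the question (the role ARNs here are hypothetical, and GetRegions is the question's own helper):
import boto3

# Hypothetical cross-account role ARNs, one per target account
ACCOUNT_ROLE_ARNS = [
    "arn:aws:iam::111111111111:role/MetricsReader",
    "arn:aws:iam::222222222222:role/MetricsReader",
]

sts = boto3.client('sts')

for role_arn in ACCOUNT_ROLE_ARNS:
    # One set of temporary credentials per account
    creds = sts.assume_role(
        RoleArn=role_arn,
        RoleSessionName='cw-metrics')['Credentials']
    for region in GetRegions():  # the question's helper
        session = boto3.Session(
            aws_access_key_id=creds['AccessKeyId'],
            aws_secret_access_key=creds['SecretAccessKey'],
            aws_session_token=creds['SessionToken'],
            region_name=region)
        cloudwatch = session.client('cloudwatch')
        # ...then call get_metric_statistics etc. on this per-account, per-region client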
So I'm trying to produce temporary, globally readable URLs for my Google Cloud Storage objects using the google-cloud-storage Python library (https://googlecloudplatform.github.io/google-cloud-python/latest/storage/blobs.html) - more specifically the Blob.generate_signed_url() method. I'm doing this from a command-line Python script within a Compute Engine instance, and I keep getting the following error:
AttributeError: you need a private key to sign credentials.the credentials you are currently using <class 'oauth2client.service_account.ServiceAccountCredentials'> just contains a token. see https://google-cloud-python.readthedocs.io/en/latest/core/auth.html?highlight=authentication#setting-up-a-service-account for more details.
I am aware that there are issues with doing this from within GCE (https://github.com/GoogleCloudPlatform/google-auth-library-python/issues/50) but I have created a new Service Account credentials following the instructions here: https://cloud.google.com/storage/docs/access-control/create-signed-urls-program and my key.json file most certainly includes a private key. Still I am seeing that error.
This is my code:
keyfile = "/path/to/my/key.json"
credentials = ServiceAccountCredentials.from_json_keyfile_name(keyfile)
expiration = timedelta(3) # valid for 3 days
url = blob.generate_signed_url(expiration, method="GET",
credentials=credentials)
I've read through the issue tracker here https://github.com/GoogleCloudPlatform/google-cloud-python/issues?page=2&q=is%3Aissue+is%3Aopen and nothing related jumps out, so I am assuming this should work. I can't see what's going wrong here.
I was having the same issue. I ended up fixing it by creating the storage client directly from the service account JSON.
storage_client = storage.Client.from_service_account_json('path_to_service_account_key.json')
I know I'm late to the party but hopefully this helps!
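In context, that approach might look like this (bucket and object names are placeholders):
from datetime import timedelta
from google.cloud import storage

# Placeholder path and names; the JSON key file must contain a private key
storage_client = storage.Client.from_service_account_json('path_to_service_account_key.json')
bucket = storage_client.bucket('my-bucket')
blob = bucket.blob('my-object')

# No explicit credentials argument needed here - the client's own
# service-account credentials can sign
url = blob.generate_signed_url(expiration=timedelta(days=3), method='GET')
print(url)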
Currently, it's not possible to use blob.generate_signed_url without explicitly referencing credentials (source: the google-cloud-python documentation). However, you can use a workaround, as seen here, which consists of:
signing_credentials = compute_engine.IDTokenCredentials(
    auth_request,
    "",
    service_account_email=credentials.service_account_email
)
signed_url = signed_blob_path.generate_signed_url(
    expires_at_ms,
    credentials=signing_credentials,
    version="v4"
)
A more complete snippet, for those asking where the other elements come from. cc @AlbertoVitoriano
from google.auth.transport import requests
from google.auth import default, compute_engine

credentials, _ = default()

# then within your abstraction
auth_request = requests.Request()
credentials.refresh(auth_request)

signing_credentials = compute_engine.IDTokenCredentials(
    auth_request,
    "",
    service_account_email=credentials.service_account_email
)

# signed_blob_path is the Blob being signed for; expires_at_ms is the expiration
signed_url = signed_blob_path.generate_signed_url(
    expires_at_ms,
    credentials=signing_credentials,
    version="v4"
)
I am using tkinter to create a GUI application that returns the security groups. Currently, if you want to change your credentials (e.g. if you accidentally entered the wrong ones), you have to restart the application; otherwise boto3 carries on using the old credentials.
I'm not sure why it keeps using the old credentials, because I am running everything again with the newly entered credentials.
This is a snippet of the code that sets the environment variables and launches boto3. It works perfectly fine if you enter the right credentials the first time.
os.environ['AWS_ACCESS_KEY_ID'] = self.accessKey
os.environ['AWS_SECRET_ACCESS_KEY'] = self.secretKey

self.sts_client = boto3.client('sts')

self.assumedRoleObject = self.sts_client.assume_role(
    RoleArn=self.role,
    RoleSessionName="AssumeRoleSession1"
)
self.credentials = self.assumedRoleObject['Credentials']

self.ec2 = boto3.resource(
    'ec2',
    region_name=self.region,
    aws_access_key_id=self.credentials['AccessKeyId'],
    aws_secret_access_key=self.credentials['SecretAccessKey'],
    aws_session_token=self.credentials['SessionToken'],
)
The credentials variables are set using:
self.accessKey = str(self.AWS_ACCESS_KEY_ID_Form.get())
self.secretKey = str(self.AWS_SECRET_ACCESS_KEY_Form.get())
self.role = str(self.AWS_ROLE_ARN_Form.get())
self.region = str(self.AWS_REGION_Form.get())
self.instanceID = str(self.AWS_INSTANCE_ID_Form.get())
Is there a way to use different credentials in boto3 without restarting the program?
You need a boto3.session.Session to override the access credentials. Just do this (reference: http://boto3.readthedocs.io/en/latest/reference/core/session.html):
import boto3

# Assign your own access keys
mysession = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')

# If you want to use a different profile, call the one named "foobar" inside .aws/credentials
mysession = boto3.session.Session(profile_name="foobar")

# Afterwards, just declare your AWS client/resource services
sqs_resource = mysession.resource("sqs")
# or client
s3_client = mysession.client("s3")
Basically, it's only a small change to your code: you just create clients from the session instead of calling boto3.client/boto3.resource directly.
self.sts_client = mysession.client('sts')
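Applied to the snippet in the question, that change might look like this (a sketch, untested):
import boto3

# Build a fresh session from the values currently in the form, instead of
# mutating os.environ (which a cached default session would ignore)
mysession = boto3.session.Session(
    aws_access_key_id=self.accessKey,
    aws_secret_access_key=self.secretKey,
    region_name=self.region,
)

self.sts_client = mysession.client('sts')
self.assumedRoleObject = self.sts_client.assume_role(
    RoleArn=self.role,
    RoleSessionName="AssumeRoleSession1"
)
self.credentials = self.assumedRoleObject['Credentials']

self.ec2 = mysession.resource(
    'ec2',
    aws_access_key_id=self.credentials['AccessKeyId'],
    aws_secret_access_key=self.credentials['SecretAccessKey'],
    aws_session_token=self.credentials['SessionToken'],
)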
Sure, just create a different boto3.session.Session object for each set of credentials:
import boto3
s1 = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')
s2 = boto3.session.Session(aws_access_key_id='foo2', aws_secret_access_key='bar2')
Also, you can leverage the set_credentials method to keep one session and change creds on the fly:
import botocore

session = botocore.session.Session()
session.set_credentials('foo', 'bar')

client = session.create_client('s3')
print(client._request_signer._credentials.access_key)
# u'foo'

session.set_credentials('foo1', 'bar')
client = session.create_client('s3')
print(client._request_signer._credentials.access_key)
# u'foo1'
The answers given by @mootmoot and @Vor clearly state the way of dealing with multiple credentials using a session.
@Vor's answer:
import boto3
s1 = boto3.session.Session(aws_access_key_id='foo1', aws_secret_access_key='bar1')
s2 = boto3.session.Session(aws_access_key_id='foo2', aws_secret_access_key='bar2')
But some of you would be curious about
why does the boto3 client or resource behave in that manner in the first place?
Let's clear up a few points about Session and Client, as they'll actually lead us to the answer to the aforementioned question.
Session
A 'Session' stores configuration state and allows you to create service clients and resources
Client
if the credentials are not passed explicitly as arguments to the boto3.client method, then the credentials configured for the session will automatically be used. You only need to provide credentials as arguments if you want to override the credentials used for this specific client
Now let's get to the code and see what actually happens when you call boto3.client()
def client(*args, **kwargs):
    return _get_default_session().client(*args, **kwargs)

def _get_default_session():
    if DEFAULT_SESSION is None:
        setup_default_session()
    return DEFAULT_SESSION

def setup_default_session(**kwargs):
    global DEFAULT_SESSION
    DEFAULT_SESSION = Session(**kwargs)
Learnings from the above
The function boto3.client() is really just a proxy for the boto3.Session.client() method
Once you use a client, the DEFAULT_SESSION is set up, and every subsequent client creation keeps using that DEFAULT_SESSION.
The credentials configured for the DEFAULT_SESSION are used if the credentials are not explicitly passed as arguments while creating the boto3 client.
Answer
The first call to boto3.client() sets up the DEFAULT_SESSION, configuring the session with oldCredsAccessKey and oldCredsSecretKey - the values already set for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY respectively.
So even if you set new credential values in the environment, i.e. do this:
os.environ['AWS_ACCESS_KEY_ID'] = newCredsAccessKey
os.environ['AWS_SECRET_ACCESS_KEY'] = newCredsSecretKey
Subsequent boto3.client() calls will still pick up the old credentials configured for the DEFAULT_SESSION.
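A quick illustration of this behavior (a sketch; boto3.setup_default_session() is the documented way to rebuild the cached default session):
import os
import boto3

os.environ['AWS_ACCESS_KEY_ID'] = 'oldCredsAccessKey'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'oldCredsSecretKey'
boto3.client('sts')  # first call: DEFAULT_SESSION is created with the old creds

os.environ['AWS_ACCESS_KEY_ID'] = 'newCredsAccessKey'
os.environ['AWS_SECRET_ACCESS_KEY'] = 'newCredsSecretKey'
boto3.client('sts')  # still uses the old creds: DEFAULT_SESSION is cached

boto3.setup_default_session()  # rebuilds DEFAULT_SESSION, picking up the new env values
boto3.client('sts')  # now uses the new creds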
NOTE
A boto3.client() call in this whole answer means a call with no arguments passed to the client method.
References
https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3.html#client
https://boto3.amazonaws.com/v1/documentation/api/latest/_modules/boto3/session.html#Session
https://ben11kehoe.medium.com/boto3-sessions-and-why-you-should-use-them-9b094eb5ca8e