How does one use put_bucket_policy()? It throws a MalformedPolicy error even when I try to pass in an existing valid policy:
import boto3
client = boto3.client('s3')
dict_policy = client.get_bucket_policy(Bucket = 'my_bucket')
str_policy = str(dict_policy)
response = client.put_bucket_policy(Bucket = 'my_bucket', Policy = str_policy)
Error message:
botocore.exceptions.ClientError: An error occurred (MalformedPolicy) when calling the PutBucketPolicy operation: This policy contains invalid Json
That's because applying str to a dict does not produce valid JSON (it gives a Python repr with single quotes); use json.dumps instead. Note also that get_bucket_policy returns the policy document as a JSON string under the 'Policy' key of the response, so that is the part to load and pass back:
import boto3
import json

client = boto3.client('s3')

# The policy document itself is a JSON string under the 'Policy' key
response = client.get_bucket_policy(Bucket='my_bucket')
policy = json.loads(response['Policy'])

# ... modify the policy dict here if needed ...
client.put_bucket_policy(Bucket='my_bucket', Policy=json.dumps(policy))
The current boto3 API has no call to APPEND to a bucket policy (i.e. to add individual items/elements/attributes). You need to load and manipulate the JSON yourself: read the policy into a dict, append the new statement to its "Statement" list, then call put to replace the whole policy. Without the original statement id, the user policy will simply be appended; HOWEVER, there is no way to tell whether a later user policy will override the rules of an earlier one.
For example
import boto3
import json

s3_client = boto3.client('s3')
bucket_policy = boto3.resource('s3').BucketPolicy('bucket_name')

# Load the existing policy document (a JSON string under the 'Policy' key) into a dict
policy = json.loads(s3_client.get_bucket_policy(Bucket='bucket_name')['Policy'])

# Append the new statement, then replace the whole policy
user_policy = { "Effect": "Allow", ... }  # the rest of the new statement goes here
policy['Statement'].append(user_policy)
bucket_policy.put(Policy=json.dumps(policy))
The user does not need to know the old policy in the process.
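If you want to confirm the change afterwards, a small sketch (reusing the same 'bucket_name' placeholder as above) is to re-fetch the policy and inspect its statements:
import boto3
import json

s3_client = boto3.client('s3')

# get_bucket_policy returns the policy document as a JSON string under 'Policy'
current = json.loads(s3_client.get_bucket_policy(Bucket='bucket_name')['Policy'])
for statement in current['Statement']:
    print(statement.get('Sid'), statement.get('Effect'))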
I am working on a small project to update an org policy constraint using Python.
I want to use Python because I have set up Secret Manager and impersonation.
Right now I am at the final stage: modifying the org policy constraint.
I have found the repo https://github.com/googleapis/python-org-policy/tree/40faa07298b3baa9a4d0ca26927b28fdd80aa03b/samples/generated_samples with a code sample for creating a constraint.
I would like to set "projects/project-id-from-gcp/policies/compute.skipDefaultNetworkCreation" to Enforced.
The code I have so far is this:
from google.cloud import orgpolicy_v2


def sample_update_policy():
    # Create a client
    client = orgpolicy_v2.OrgPolicyClient()

    # Initialize request argument(s)
    request = orgpolicy_v2.UpdatePolicyRequest(
        policy="""
        name: "projects/project-id-from-gcp/policies/compute.skipDefaultNetworkCreation"
        spec {
          rules {
            enforce: true
          }
        }
        """
    )

    # Make the request
    response = client.update_policy(request=request)

    # Handle the response
    print(response)


sample_update_policy()
But I get the error: google.api_core.exceptions.InvalidArgument: 400 Request contains an invalid argument.
I do not understand what exactly to write in "CreatePolicyRequest".
I also found https://googleapis.dev/python/orgpolicy/1.0.2/orgpolicy_v2/types.html#google.cloud.orgpolicy_v2.types.Policy, but it is not entirely clear to me.
I was also looking at https://cloud.google.com/python/docs/reference/orgpolicy/latest/google.cloud.orgpolicy_v2.services.org_policy.OrgPolicyClient#google_cloud_orgpolicy_v2_services_org_policy_OrgPolicyClient_update_policy, but I honestly do not understand how to do it.
(I do not think what I modified is even correct.)
Could you please point me in the right direction? Thank you.
Your problem is that you are passing a YAML string as the policy parameter to UpdatePolicyRequest(), which expects a Policy object rather than a string. You were on the correct path with your links.
from google.cloud import orgpolicy_v2
from google.cloud.orgpolicy_v2 import types


def build_policy():
    rule = types.PolicySpec.PolicyRule()
    rule.enforce = True

    spec = types.PolicySpec()
    spec.rules.append(rule)

    policy = types.Policy(
        name="projects/project-id-from-gcp/policies/compute.skipDefaultNetworkCreation",
        spec=spec
    )
    return policy


def sample_update_policy():
    # Create a client
    client = orgpolicy_v2.OrgPolicyClient()

    policy = build_policy()

    # Debug - view the created policy
    print(policy)

    # Initialize request argument(s)
    request = orgpolicy_v2.UpdatePolicyRequest(
        policy=policy
    )

    # Make the request
    response = client.update_policy(request=request)

    # Handle the response
    print(response)


sample_update_policy()
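If you want to verify the change afterwards, a small sketch along the same lines (assuming get_policy / GetPolicyRequest from the same orgpolicy_v2 client, and the same policy name as above) could read the policy back:
from google.cloud import orgpolicy_v2


def check_policy():
    client = orgpolicy_v2.OrgPolicyClient()

    # Read the policy back to confirm the enforce rule is now set
    request = orgpolicy_v2.GetPolicyRequest(
        name="projects/project-id-from-gcp/policies/compute.skipDefaultNetworkCreation"
    )
    print(client.get_policy(request=request))


check_policy()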
I'm trying to update an AWS security group with one inbound rule using a Lambda function (Python 3.7). For example, I would like to add my public IP with port 8443 to an existing security group. I followed the link below to achieve this.
Modifying ec2 security group using lambda function
When I run this code in a Lambda function with Python 3.7, it does not work.
import boto3
import hashlib
import json
import copy

# ID of the security group we want to update
SECURITY_GROUP_ID = "sg-XXXX"

# Description of the security rule we want to replace
SECURITY_RULE_DESCR = "My Home IP"


def lambda_handler(event, context):
    new_ip_address = list(event.values())[0]
    result = update_security_group(new_ip_address)
    return result


def update_security_group(new_ip_address):
    client = boto3.client('ec2')

    response = client.describe_security_groups(GroupIds=[SECURITY_GROUP_ID])
    group = response['SecurityGroups'][0]

    for permission in group['IpPermissions']:
        new_permission = copy.deepcopy(permission)
        ip_ranges = new_permission['IpRanges']
        for ip_range in ip_ranges:
            if ip_range['Description'] == 'My Home IP':
                ip_range['CidrIp'] = "%s/32" % new_ip_address

                client.revoke_security_group_ingress(GroupId=group['GroupId'], IpPermissions=[permission])
                client.authorize_security_group_ingress(GroupId=group['GroupId'], IpPermissions=[new_permission])

    return ""
I got the response "" when I tested this code in the Lambda function.
The revoke_security_group_ingress() and authorize_security_group_ingress() calls in boto3 include a Return field in the response:
Returns true if the request succeeds; otherwise, returns an error.
However, your code does not seem to be storing a response or examining its contents.
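For example, here is a sketch of update_security_group (same constants and approach as above) that stores both responses and returns their 'Return' values instead of an empty string:
import copy
import boto3

SECURITY_GROUP_ID = "sg-XXXX"
SECURITY_RULE_DESCR = "My Home IP"


def update_security_group(new_ip_address):
    client = boto3.client('ec2')

    response = client.describe_security_groups(GroupIds=[SECURITY_GROUP_ID])
    group = response['SecurityGroups'][0]

    results = []
    for permission in group['IpPermissions']:
        new_permission = copy.deepcopy(permission)
        for ip_range in new_permission['IpRanges']:
            if ip_range.get('Description') == SECURITY_RULE_DESCR:
                ip_range['CidrIp'] = "%s/32" % new_ip_address

                revoke_resp = client.revoke_security_group_ingress(
                    GroupId=group['GroupId'], IpPermissions=[permission])
                auth_resp = client.authorize_security_group_ingress(
                    GroupId=group['GroupId'], IpPermissions=[new_permission])

                # Each call returns a response with a boolean 'Return' field
                results.append({'revoked': revoke_resp.get('Return'),
                                'authorized': auth_resp.get('Return')})
    return results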
I have set up an AWS Lambda function that is triggered by AWS Cognito on a successful email confirmation. The Lambda function is in Python 3.6.
I am referring to the AWS documentation for Cognito postConfirmation trigger.
https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-post-confirmation.html
"response": {}
So far I have tried returning None, {}, '{}' (an empty JSON string), and a valid dictionary like {'status': 200, 'message': 'the message string'}, but each attempt gives this error:
botocore.errorfactory.InvalidLambdaResponseException: An error occurred (InvalidLambdaResponseException) when calling the ConfirmSignUp operation: Unrecognizable lambda output
What should a valid response from the post confirmation function look like?
Here is the relevant part of the code.
from DBConnect import user
import json


def lambda_handler(event, context):
    ua = event['request']['userAttributes']
    print("create user ua = ", ua)

    if 'name' in ua:
        name = ua['name']
    else:
        name = "guest"

    newUser = user.create(
        name=name,
        uid=ua['sub'],
        owner=ua['sub'],
        phoneNumber=ua['phone_number'],
        email=ua['email']
    )
    print(newUser)
    return '{}'  # <--- I am using literals here only.
You need to return the event object:
return event
This is not obvious in the examples they provide in the documentation. You may want to check and ensure the event object does contain a response key (it should).
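For instance, a minimal sketch of the end of the handler above (the user-creation code stays exactly as it is):
def lambda_handler(event, context):
    # ... create the user as above ...

    # Cognito expects the incoming event back, with its 'response' key present
    event.setdefault('response', {})
    return event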
I have a trial account with Azure and have uploaded some JSON files into Cosmos DB. I am writing a Python program to review the data, but I am having trouble doing so. This is the code I have so far:
import pydocumentdb.documents as documents
import pydocumentdb.document_client as document_client
import pydocumentdb.errors as errors
url = 'https://ronyazrak.documents.azure.com:443/'
key = '' # primary key
# Initialize the Python DocumentDB client
client = document_client.DocumentClient(url, {'masterKey': key})
collection_link = '/dbs/test1/colls/test1'
collection = client.ReadCollection(collection_link)
result_iterable = client.QueryDocuments(collection)
query = { 'query': 'SELECT * FROM server s' }
I read somewhere that the key should be the primary key listed under Keys for my Azure Cosmos DB account. In my actual code the key string is filled with that primary key; it is left empty here only for privacy.
I also read somewhere that the collection_link should be '/dbs/test1/colls/test1' if my data is in the collection 'test1'.
My code fails at the client.ReadCollection() call.
This is the error I get: "pydocumentdb.errors.HTTPFailure: Status code: 401
{"code":"Unauthorized","message":"The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ncolls\ndbs/test1/colls/test1\nmon, 29 may 2017 19:47:28 gmt\n\n'\r\nActivityId: 03e13e74-8db4-4661-837a-f8d81a2804cc"}"
Once this error is fixed, what is left to do? I want to get the JSON documents back as one big dictionary so that I can review the data.
Am I on the right path, or am I approaching this the wrong way? How can I read the documents in my database? Thanks.
According to the error message, it seems to be caused by authentication failing with your key, as the official explanation linked here describes.
So please check your key, but I think the real issue is that pydocumentdb is being used incorrectly. The ids of a Database, Collection & Document are different from their links: APIs such as ReadCollection and QueryDocuments need to be passed the related link. You have to retrieve resources in Azure Cosmos DB via their resource links, not their resource ids.
According to your description, I think you want to list all documents under the collection id path /dbs/test1/colls/test1. For reference, here is my sample code:
from pydocumentdb import document_client

uri = 'https://ronyazrak.documents.azure.com:443/'
key = '<your-primary-key>'

client = document_client.DocumentClient(uri, {'masterKey': key})

# Query the database by its id to get its self-link
db_id = 'test1'
db_query = "select * from r where r.id = '{0}'".format(db_id)
db = list(client.QueryDatabases(db_query))[0]
db_link = db['_self']

# Query the collection by its id to get its self-link
coll_id = 'test1'
coll_query = "select * from r where r.id = '{0}'".format(coll_id)
coll = list(client.QueryCollections(db_link, coll_query))[0]
coll_link = coll['_self']

# Read all documents via the collection's self-link
docs = client.ReadDocuments(coll_link)
print(list(docs))
Please see the details of the DocumentDB Python SDK here.
For those using azure-cosmos (the current library as of 2019): I opened a doc bug and provided a sample on GitHub.
Sample
from azure.cosmos import cosmos_client
import json
CONFIG = {
"ENDPOINT": "ENDPOINT_FROM_YOUR_COSMOS_ACCOUNT",
"PRIMARYKEY": "KEY_FROM_YOUR_COSMOS_ACCOUNT",
"DATABASE": "DATABASE_ID", # Prolly looks more like a name to you
"CONTAINER": "YOUR_CONTAINER_ID" # Prolly looks more like a name to you
}
CONTAINER_LINK = f"dbs/{CONFIG['DATABASE']}/colls/{CONFIG['CONTAINER']}"
FEEDOPTIONS = {}
FEEDOPTIONS["enableCrossPartitionQuery"] = True
# There is also a partitionKey feed option, but I was unable to figure out how to use it.
QUERY = {
"query": f"SELECT * from c"
}
# Initialize the Cosmos client
client = cosmos_client.CosmosClient(
url_connection=CONFIG["ENDPOINT"], auth={"masterKey": CONFIG["PRIMARYKEY"]}
)
# Query for some data
results = client.QueryItems(CONTAINER_LINK, QUERY, FEEDOPTIONS)
# Look at your data
print(list(results))
# You can also use the list as JSON
json.dumps(list(results), indent=4)
I am trying to read a 700 MB file stored in S3. However, I only require the bytes from locations 73 to 1024.
I tried to find a usable solution but failed to. It would be a great help if someone could help me out.
S3 supports GET requests using the 'Range' HTTP header which is what you're after.
To specify a Range request in boto, just add a header dictionary specifying the 'Range' key for the bytes you are interested in. Adapted from Mitchell Garnaat's response:
import boto
s3 = boto.connect_s3()
bucket = s3.lookup('mybucket')
key = bucket.lookup('mykey')
your_bytes = key.get_contents_as_string(headers={'Range' : 'bytes=73-1024'})
A boto3 version of the same thing, from https://github.com/boto/boto3/issues/1236:
import boto3

obj = boto3.resource('s3').Object('mybucket', 'mykey')
stream = obj.get(Range='bytes=32-64')['Body']
print(stream.read())
Please have a look at the Python script below:
import boto3

region = 'us-east-1'            # define your region here
bucketname = 'test'             # define your bucket
key = 'objkey'                  # the S3 object key
bytes_range = 'bytes=73-1024'   # the byte range to fetch

client = boto3.client('s3', region_name=region)
resp = client.get_object(Bucket=bucketname, Key=key, Range=bytes_range)
data = resp['Body'].read()
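To sanity-check that only the requested slice came back, the get_object response for a ranged request should also carry the range metadata (this sketch reuses the resp and data variables from the script above):
print(resp['ContentRange'])  # e.g. "bytes 73-1024/<total object size>"
print(len(data))             # 952 bytes for 'bytes=73-1024', since the range is inclusive on both ends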