Creating Signed URLs for Amazon CloudFront - python

Short version: How do I make signed URLs "on-demand" to mimic Nginx's X-Accel-Redirect behavior (i.e. protecting downloads) with Amazon CloudFront/S3, using Python?
I've got a Django server up and running behind an Nginx front-end. It's been getting hammered with requests, and I recently had to run it as a Tornado WSGI application to keep it from crashing in FastCGI mode.
Now my server is getting bogged down (most of its bandwidth is consumed by requests for media), so I've been looking into CDNs, and I believe Amazon CloudFront/S3 would be the proper solution for me.
I've been using Nginx's X-Accel-Redirect header to protect the files from unauthorized downloading, but I don't have that ability with CloudFront/S3. They do, however, offer signed URLs. I'm no Python expert by far and definitely don't know how to create a signed URL properly, so if anyone has a link explaining how to make these URLs "on-demand", or would be willing to explain it here, it would be greatly appreciated.
Also, is this even the proper solution? I'm not too familiar with CDNs; is there one better suited for this?

Amazon CloudFront signed URLs work differently from Amazon S3 signed URLs. CloudFront uses RSA signatures based on a separate CloudFront keypair which you have to set up in your Amazon Account Credentials page. Here's some code to generate a time-limited URL in Python using the M2Crypto library:
Create a keypair for CloudFront
I think the only way to do this is through Amazon's website. Go into your AWS "Account" page and click on the "Security Credentials" link. Click on the "Key Pairs" tab, then click "Create a New Key Pair". This will generate a new keypair for you and automatically download a private key file (pk-xxxxxxxxx.pem). Keep the key file safe and private. Also note down the "Key Pair ID" from Amazon, as we will need it in the next step.
Generate some URLs in Python
As of boto version 2.0 there does not seem to be any support for generating signed CloudFront URLs. Python does not include RSA signing routines in the standard library, so we will have to use an additional library. I've used M2Crypto in this example.
For a non-streaming distribution, you must use the full CloudFront URL as the resource; for a streaming distribution we use only the object name of the video file. See the code below for a full example of generating a URL which only lasts for 5 minutes.
This code is based loosely on the PHP example code provided by Amazon in the CloudFront documentation.
from M2Crypto import EVP
import base64
import time

def aws_url_base64_encode(msg):
    msg_base64 = base64.b64encode(msg)
    msg_base64 = msg_base64.replace('+', '-')
    msg_base64 = msg_base64.replace('=', '_')
    msg_base64 = msg_base64.replace('/', '~')
    return msg_base64

def sign_string(message, priv_key_string):
    key = EVP.load_key_string(priv_key_string)
    key.reset_context(md='sha1')
    key.sign_init()
    key.sign_update(message)
    signature = key.sign_final()
    return signature

def create_url(url, encoded_signature, key_pair_id, expires):
    signed_url = "%(url)s?Expires=%(expires)s&Signature=%(encoded_signature)s&Key-Pair-Id=%(key_pair_id)s" % {
        'url': url,
        'expires': expires,
        'encoded_signature': encoded_signature,
        'key_pair_id': key_pair_id,
    }
    return signed_url

def get_canned_policy_url(url, priv_key_string, key_pair_id, expires):
    # We manually construct this policy string to ensure the formatting matches the signature
    canned_policy = '{"Statement":[{"Resource":"%(url)s","Condition":{"DateLessThan":{"AWS:EpochTime":%(expires)s}}}]}' % {'url': url, 'expires': expires}
    # Sign the non-encoded policy
    signature = sign_string(canned_policy, priv_key_string)
    # Now base64-encode the signature (URL-safe as well)
    encoded_signature = aws_url_base64_encode(signature)
    # Combine these into a full URL
    signed_url = create_url(url, encoded_signature, key_pair_id, expires)
    return signed_url

def encode_query_param(resource):
    enc = resource
    enc = enc.replace('?', '%3F')
    enc = enc.replace('=', '%3D')
    enc = enc.replace('&', '%26')
    return enc

# Set parameters for the URL
key_pair_id = "APKAIAZVIO4BQ"  # from the AWS account's CloudFront tab
priv_key_file = "cloudfront-pk.pem"  # your private keypair file
# Use the FULL URL for non-streaming:
resource = "http://34254534.cloudfront.net/video.mp4"
#resource = 'video.mp4'  # your resource (just the object name for streaming videos)
expires = int(time.time()) + 300  # 5 min

# Create the signed URL
priv_key_string = open(priv_key_file).read()
signed_url = get_canned_policy_url(resource, priv_key_string, key_pair_id, expires)
print(signed_url)

# Flash player doesn't like query params, so encode them if you're using a streaming distribution
#enc_url = encode_query_param(signed_url)
#print(enc_url)
Make sure that you set up your distribution with a TrustedSigners parameter set to the account holding your keypair (or "Self" if it's your own account).
See Getting started with secure AWS CloudFront streaming with Python for a fully worked example of setting this up for streaming with Python.

This feature is now supported in botocore, which is the underlying library of boto3, the latest official AWS SDK for Python. (The following sample requires the installation of the rsa package, but you can use any other RSA package too; just define your own "normalized RSA signer".)
The usage looks like this:
from datetime import datetime

from botocore.signers import CloudFrontSigner
import rsa

def rsa_signer(message):
    private_key = open('private_key.pem', 'r').read()
    return rsa.sign(
        message,
        rsa.PrivateKey.load_pkcs1(private_key.encode('utf8')),
        'SHA-1')  # CloudFront requires SHA-1 hash

# First you create a CloudFront signer based on a normalized RSA signer:
cf_signer = CloudFrontSigner(key_id, rsa_signer)

# To sign with a canned policy:
signed_url = cf_signer.generate_presigned_url(
    url, date_less_than=datetime(2015, 12, 1))

# To sign with a custom policy:
signed_url = cf_signer.generate_presigned_url(url, policy=my_policy)
Disclaimer: I am the author of that PR.

As many have commented already, the initially accepted answer doesn't actually apply to Amazon CloudFront: Serving Private Content through CloudFront requires the use of dedicated CloudFront signed URLs. Accordingly, secretmike's answer was correct, but it is meanwhile outdated after he himself took the time and added support for generating signed URLs for CloudFront (thanks much for this!).
boto now supports a dedicated create_signed_url method, and the former binary dependency M2Crypto has recently been replaced with a pure-Python RSA implementation as well; see Don't use M2Crypto for cloudfront URL signing.
As is increasingly common, one can find one or more good usage examples within the related unit tests (see test_signed_urls.py), for example test_canned_policy(self) - see setUp(self) for the referenced variables self.pk_id and self.pk_str (obviously you'll need your own keys):
def test_canned_policy(self):
    """
    Generate a signed URL from the Example Canned Policy in Amazon's
    documentation.
    """
    url = "http://d604721fxaaqy9.cloudfront.net/horizon.jpg?large=yes&license=yes"
    expire_time = 1258237200
    expected_url = "http://example.com/"  # replaced for brevity
    signed_url = self.dist.create_signed_url(
        url, self.pk_id, expire_time, private_key_string=self.pk_str)
    # self.assertEqual(expected_url, signed_url)

This is what I use to create a policy so that I can give access to multiple files with the same "signature":
import json
import time
from base64 import b64encode

import rsa

url = "http://your_domain/*"
expires = int(time.time() + 3600)
pem = """-----BEGIN RSA PRIVATE KEY-----
...
-----END RSA PRIVATE KEY-----"""
key_pair_id = 'ABX....'

policy = {}
policy['Statement'] = [{}]
policy['Statement'][0]['Resource'] = url
policy['Statement'][0]['Condition'] = {}
policy['Statement'][0]['Condition']['DateLessThan'] = {}
policy['Statement'][0]['Condition']['DateLessThan']['AWS:EpochTime'] = expires
policy = json.dumps(policy)

private_key = rsa.PrivateKey.load_pkcs1(pem)
signature = b64encode(rsa.sign(policy.encode('utf-8'), private_key, 'SHA-1'))

print('?Policy=%s&Signature=%s&Key-Pair-Id=%s' % (b64encode(policy.encode('utf-8')).decode(),
                                                  signature.decode(),
                                                  key_pair_id))
I can use it for all files under http://your_domain/*, for example:
http://your_domain/image1.png?Policy...
http://your_domain/image2.png?Policy...
http://your_domain/file1.json?Policy...
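One caveat with the raw b64encode above: its output can contain +, = and / characters, which CloudFront expects to be swapped for -, _ and ~ respectively (the same substitution the aws_url_base64_encode helper in the accepted answer performs). A small standalone sketch of that translation:

```python
import base64

def cloudfront_b64(data: bytes) -> str:
    """Base64-encode and apply CloudFront's URL-safe character swaps."""
    encoded = base64.b64encode(data).decode('ascii')
    return encoded.translate(str.maketrans('+=/', '-_~'))

# These bytes base64-encode to '++//', exercising both swapped characters
print(cloudfront_b64(b'\xfb\xef\xff'))  # → '--~~'
```

Apply it to both the policy and the signature before building the query string.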

secretmike's answer works, but it is better to use rsa instead of M2Crypto.
I used boto, which uses rsa.
import time

import boto
from boto.cloudfront import CloudFrontConnection
from boto.cloudfront.distribution import Distribution

expire_time = int(time.time() + 3000)
conn = CloudFrontConnection('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')

# Enter the id or domain name to select a distribution
distribution = Distribution(connection=conn, config=None, domain_name='',
                            id='', last_modified_time=None, status='')
signed_url = distribution.create_signed_url(
    url='YOUR_URL',
    keypair_id='YOUR_KEYPAIR_ID_example-APKAIAZVIO4BQ',
    expire_time=expire_time,
    private_key_file="YOUR_PRIVATE_KEY_FILE_LOCATION")
See the boto documentation for details.

I found a simple solution that does not require changing the s3.generate_url call at all: in your CloudFront origin configuration, select Yes, Update bucket policy.
After that, change the URL from:
https://xxxx.s3.amazonaws.com/hello.png?Signature=sss&Expires=1585008320&AWSAccessKeyId=kkk
to
https://yyy.cloudfront.net/hello.png?Signature=sss&Expires=1585008320&AWSAccessKeyId=kkk
where yyy.cloudfront.net is your CloudFront domain.
Refer to: https://aws.amazon.com/blogs/developer/accessing-private-content-in-amazon-cloudfront/

Related

What is the right way to validate a StoreKit 2 transaction jwsRepresentation in python?

It's unclear from the docs what you actually do to verify the jwsRepresentation string from a StoreKit 2 transaction on the server side.
Also "signedPayload" from the Apple App Store Notifications V2 seems to be the same, but there is also no documentation around actually validating that either outside of validating it client side on device.
What gives? What do we do with this JWS/JWT?
(DISCLAIMER: I am a crypto novice so check me on this if I'm using the wrong terms, etc. throughout)
The JWS in jwsRepresentation, and the signedPayload in the Notification V2 JSON body, are JWTs — you can take one and check it out at jwt.io. The job is to validate the JWT signature and extract the payload once you're sufficiently convinced it's really from Apple. Then the payload itself contains information you can use to upgrade the user's account/etc. server side once the data is trusted.
To validate the JWT, you need to find the signature that the JWT is signed with, specified in the JWT header's "x5c" collection, validate the certificate chain, and then validate that the signature is really from Apple.
STEP ONE: Load the well-known root & intermediate certs from Apple.
import requests
from OpenSSL import crypto

ROOT_CER_URL = "https://www.apple.com/certificateauthority/AppleRootCA-G3.cer"
G6_CER_URL = "https://www.apple.com/certificateauthority/AppleWWDRCAG6.cer"

root_cert_bytes: bytes = requests.get(ROOT_CER_URL).content
root_cert = crypto.load_certificate(crypto.FILETYPE_ASN1, root_cert_bytes)
g6_cert_bytes: bytes = requests.get(G6_CER_URL).content
g6_cert = crypto.load_certificate(crypto.FILETYPE_ASN1, g6_cert_bytes)
STEP TWO: Get certificate chain out of the JWT header
from typing import List

import jwt  # PyJWT library
from jwt.utils import base64url_decode

# Get the signing keys out of the JWT header. The header will look like:
# {"alg": "ES256", "x5c": ["...base64 cert...", "...base64 cert..."]}
header = jwt.get_unverified_header(apple_jwt_string)

provided_certificates: List[crypto.X509] = []
for cert_base64 in header['x5c']:
    cert_bytes = base64url_decode(cert_base64)
    cert = crypto.load_certificate(crypto.FILETYPE_ASN1, cert_bytes)
    provided_certificates.append(cert)
STEP THREE: Validate the chain is what you think it is -- this ensures the cert chain is signed by the real Apple root & intermediate certs.
import logging

# First make sure these are the root & intermediate certs from Apple:
assert provided_certificates[-2].digest('sha256') == g6_cert.digest('sha256')
assert provided_certificates[-1].digest('sha256') == root_cert.digest('sha256')

# Now validate that the cert chain is cryptographically legit:
store = crypto.X509Store()
store.add_cert(root_cert)
store.add_cert(g6_cert)
for cert in provided_certificates[:-2]:
    try:
        crypto.X509StoreContext(store, cert).verify_certificate()
    except crypto.X509StoreContextError:
        logging.error("Invalid certificate chain in JWT: %s", apple_jwt_string)
        return None
    store.add_cert(cert)
FINALLY: Load & validate the JWT using the now-trusted certificate in the header.
# Now that the cert is validated, we can use it to verify the actual signature
# of the JWT. PyJWT does not understand this certificate if we pass it in, so
# we have to get the cryptography library's version of the same key:
cryptography_version_of_key = provided_certificates[0].get_pubkey().to_cryptography_key()
try:
    return jwt.decode(apple_jwt_string, cryptography_version_of_key, algorithms=["ES256"])
except Exception:
    logging.exception("Problem validating Apple JWT")
    return None
Voila you now have a validated JWT body from the App Store at your disposal.
Gist of entire solution: https://gist.github.com/taylorhughes/3968575b40dd97f851f35892931ebf3e

Generate presigned url for uploading file to google storage using python

I want to upload an image from the front end to Google Storage using JavaScript AJAX functionality. I need a presigned URL that the server would generate, which would authorize my frontend to upload a blob.
How can I generate a presigned URL when using my local machine?
Previously for AWS S3 I would do:
pp = s3.generate_presigned_post(
    Bucket=settings.S3_BUCKET_NAME,
    Key='folder1/' + file_name,
    ExpiresIn=20  # seconds
)
When generating a signed URL for a user to just view a file stored on Google Storage, I do:
bucket = settings.CLIENT.bucket(settings.BUCKET_NAME)
blob_name = 'folder/img1.jpg'
blob = bucket.blob(blob_name)
url = blob.generate_signed_url(
    version='v4',
    expiration=datetime.timedelta(minutes=1),
    method='GET')
Spent $100 on Google support and two weeks of my time to finally find a solution.
client = storage.Client()  # works on App Engine standard without any credentials requirements
But if you want to use the generate_signed_url() function, then you need a service account JSON key.
Every App Engine standard app has a default service account (you can find it under IAM/Service Accounts). Create a key for that default service account and download the key ('sv_key.json') in JSON format. Store that key in your Django project right next to the app.yaml file. Then do the following:
import datetime

from google.cloud import storage

CLIENT = storage.Client.from_service_account_json('sv_key.json')
bucket = CLIENT.bucket('bucket_name_1')
blob = bucket.blob('img1.jpg')  # name of file to be saved/uploaded to storage
pp = blob.generate_signed_url(
    version='v4',
    expiration=datetime.timedelta(minutes=1),
    method='POST')
This will work on your local machine and on GAE standard. When you deploy your app to GAE, sv_key.json also gets deployed with the Django project, and hence it works.
Hope it helps you.
Editing my answer as I didn't understand the problem you were facing.
Taking a look at the comments thread in the question, as #Nick Shebanov stated, there's a possibility to accomplish what you are trying to do when using GAE with the flex environment.
I have been trying to do the same with GAE Standard environment with no luck so far. At this point, I would recommend opening a feature request at the public issue tracker so this gets somehow implemented.
Create a service account private key and store it in SecretManager (SM).
In settings.py retrieve that key from SecretManager and store it in a constant - SV_ACCOUNT_KEY
Override the Client() class func from_service_account_json() to take the JSON key content instead of a path to a JSON file. This way we don't have to have a JSON file in our file system (locally, in Cloud Build, or on GAE); we can just get the private key contents from SM anytime, anywhere.
settings.py
secret = SecretManager()
SV_ACCOUNT_KEY = secret.access_secret_data('SV_ACCOUNT_KEY')
signed_url_mixin.py
import datetime
import json

from django.conf import settings
from google.cloud.storage.client import Client
from google.oauth2 import service_account


class CustomClient(Client):
    @classmethod
    def from_service_account_json(cls, json_credentials_path, *args, **kwargs):
        """
        Copying everything from the base func (from_service_account_json).
        Instead of passing a json file for the private key, we pass the
        private key json contents directly (since we cannot save a file on
        GAE). Since it's not written to be extensible, we cannot just
        override part of the class or func; we have to rewrite the entire
        func.
        """
        if "credentials" in kwargs:
            raise TypeError("credentials must not be in keyword arguments")
        credentials_info = json.loads(json_credentials_path)
        credentials = service_account.Credentials.from_service_account_info(
            credentials_info
        )
        if cls._SET_PROJECT:
            if "project" not in kwargs:
                kwargs["project"] = credentials_info.get("project_id")
        kwargs["credentials"] = credentials
        return cls(*args, **kwargs)


class _SignedUrlMixin:
    bucket_name = settings.BUCKET_NAME
    CLIENT = CustomClient.from_service_account_json(settings.SV_ACCOUNT_KEY)
    exp_min = 4  # expire minutes

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.bucket = self.CLIENT.bucket(self.bucket_name)

    def _signed_url(self, file_name, method):
        blob = self.bucket.blob(file_name)
        signed_url = blob.generate_signed_url(
            version='v4',
            expiration=datetime.timedelta(minutes=self.exp_min),
            method=method
        )
        return signed_url


class GetSignedUrlMixin(_SignedUrlMixin):
    """
    A GET url to view a file on CS
    """

    def get_signed_url(self, file_name):
        """
        :param file_name: name of file to be retrieved from CS.
            e.g. xyz/f1.pdf
        :return: GET signed url
        """
        method = 'GET'
        return self._signed_url(file_name, method)


class PutSignedUrlMixin(_SignedUrlMixin):
    """
    A PUT url to make a PUT req to upload a file to CS
    """

    def put_signed_url(self, file_name):
        """
        :file_name: e.g. xyz/f1.pdf
        """
        method = 'PUT'
        return self._signed_url(file_name, method)

secure HMAC authorization with changing signature

I have a Rails app that provides a JSON API consumed by a Python script. Security is important, and I've been using HMAC to do it. The Rails app and the Python script both know the secret key, and the signature they encrypt with it covers the URL and the body of the request.
My problem is that the signature of the request doesn't change each time. If it was intercepted, an attacker could send the exact same request with the same digest, and I think it would authenticate, even though the attacker doesn't know the secret key.
So I think I need something like a timestamp of the request included in the signature; the problem is I don't know how to do that in Python and Ruby.
This is my python code:
import hmac
import hashlib
import requests

fetch_path = url_base + '/phone_messages/pending'
fetch_body = '{}'
fetch_signature = fetch_path + ':' + fetch_body
fetch_hmac = hmac.new(api_key.encode('utf-8'), fetch_signature.encode('utf-8'), hashlib.sha1).hexdigest()
and this is my ruby code:
signature = "#{request.url}:#{request.body.to_json.to_s}"
hmac_digest = OpenSSL::HMAC.hexdigest('sha1', secret_key, signature)
Question: I need to have something like a timestamp of the request included in the signature
For example:
import hmac, hashlib, datetime

api_key = 'this is a key'
fetch_path = 'http://phone_messages/pending'
fetch_body = '{}'
fetch_data = fetch_path + ':' + fetch_body

for n in range(3):
    fetch_signature = fetch_data + str(datetime.datetime.now().timestamp())
    fetch_hmac = hmac.new(api_key.encode('utf-8'), fetch_signature.encode('utf-8'), hashlib.sha1).hexdigest()
    print("{}:{} {}".format(n, fetch_signature, fetch_hmac))
Output:
0:http://phone_messages/pending:{}1538660666.768066 cfa49feaeaf0cdc5ec8bcf1057446c425863e83a
1:http://phone_messages/pending:{}1538660666.768358 27d0a5a9f33345babf0c824f45837d3b8863741e
2:http://phone_messages/pending:{}1538660666.768458 67298ad0e9eb8bb629fce4454f092b74ba8d6c66
I recommend discussing security questions at security.stackexchange.com.
As a starting point, read: what-is-a-auth-key-in-the-security-of-the-computers
I resolved this by putting the timestamp (seconds since epoch) in the body of the POST request, or a parameter of the GET request, and including it in the string that gets signed. This means the HMAC hash is different for every request that arrives in a different second.
Then to prevent an attacker just using a previously seen timestamp I verified on the server that the timestamp is not more than 5 seconds before the current.
An attacker with a really fast turn around of intercepting a communication and sending an attack could still get through, but I couldn't drop the timeout below 5 seconds because it's already getting some requests timing out.
Since the whole thing is done under SSL I think it should be secure enough.
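The server-side check described above can be sketched as follows. This is illustrative only: the function and variable names are mine, the shared key is a placeholder, and the 5-second window is the one mentioned above.

```python
import hmac
import hashlib
import time

SECRET_KEY = b'this is a key'  # placeholder shared secret
MAX_AGE_SECONDS = 5

def verify_request(path: str, body: str, timestamp: str, client_hmac: str) -> bool:
    """Recompute the HMAC over path:body + timestamp and check freshness."""
    # Reject stale (or future-dated) timestamps to limit replay attacks
    age = time.time() - float(timestamp)
    if age > MAX_AGE_SECONDS or age < 0:
        return False
    signature = '{}:{}{}'.format(path, body, timestamp)
    expected = hmac.new(SECRET_KEY, signature.encode('utf-8'),
                        hashlib.sha1).hexdigest()
    # Constant-time comparison avoids leaking the digest via timing
    return hmac.compare_digest(expected, client_hmac)
```

The client builds the same `path:body + timestamp` string, sends the timestamp alongside the digest, and the server accepts only fresh, matching requests.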

Get url of public accessible s3 object using boto3 without expiration or security info

Running a line like:
s3_obj = boto3.resource('s3').Object(bucket, key)
s3_obj.meta.client.generate_presigned_url('get_object', ExpiresIn=0, Params={'Bucket':bucket,'Key':key})
Yields a result like:
https://my-bucket.s3.amazonaws.com/my-key/my-object-name?AWSAccessKeyId=SOMEKEY&Expires=SOMENUMBER&x-amz-security-token=SOMETOKEN
For an s3 object with public-read ACL, all the GET params are unnecessary.
I could cheat and rewrite the URL without the GET params, but that feels unclean and hacky.
How do I use boto3 to provide me with just the public link, e.g. https://my-bucket.s3.amazonaws.com/my-key/my-object-name? In other words, how do I skip the signing step in generate_presigned_url? I don't see anything like a generated_unsigned_url function.
The best solution I found is still to use generate_presigned_url, just with the client's signature_version set to botocore.UNSIGNED.
The following returns the public link without the signing stuff.
import boto3
import botocore
from botocore.client import Config

config = Config(signature_version=botocore.UNSIGNED)
url = boto3.client('s3', config=config).generate_presigned_url(
    'get_object', ExpiresIn=0, Params={'Bucket': bucket, 'Key': key})
The relevant discussions on the boto3 repository are:
https://github.com/boto/boto3/issues/110
https://github.com/boto/boto3/issues/169
https://github.com/boto/boto3/issues/1415
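Since an unsigned URL carries no auth material at all, for a public-read object you can also just assemble the virtual-hosted-style URL by hand. A small sketch (bucket and key are placeholders, and this assumes the default s3.amazonaws.com endpoint rather than a regional one):

```python
from urllib.parse import quote

def public_s3_url(bucket: str, key: str) -> str:
    # Virtual-hosted-style URL; percent-encode the key but keep '/' separators
    return "https://{}.s3.amazonaws.com/{}".format(bucket, quote(key))

print(public_s3_url("my-bucket", "my-key/my-object-name"))
# → https://my-bucket.s3.amazonaws.com/my-key/my-object-name
```

The generate_presigned_url approach above has the advantage of reusing whatever endpoint and addressing style the client is already configured with.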

Google Cloud Storage Signed URLs with Google App Engine

It's frustrating to deal with the regular Signed URLs (Query String Authentication) for Google Cloud Storage.
Google Cloud Storage Signed URLs Example -> Is this really the only code available on the whole internet for generating signed URLs for Google Cloud Storage? Should I read it all and adapt it manually for pure-Python GAE if needed?
It's ridiculous when you compare it with AWS S3's getAuthenticatedURL(), already included in any SDK...
Am I missing something obvious, or does everyone face the same problem? What's the deal?
Here's how to do it in Go:
func GenerateSignedURLs(c appengine.Context, host, resource string, expiry time.Time, httpVerb, contentMD5, contentType string) (string, error) {
    sa, err := appengine.ServiceAccount(c)
    if err != nil {
        return "", err
    }
    expUnix := expiry.Unix()
    expStr := strconv.FormatInt(expUnix, 10)
    sl := []string{
        httpVerb,
        contentMD5,
        contentType,
        expStr,
        resource,
    }
    unsigned := strings.Join(sl, "\n")
    _, b, err := appengine.SignBytes(c, []byte(unsigned))
    if err != nil {
        return "", err
    }
    sig := base64.StdEncoding.EncodeToString(b)
    p := url.Values{
        "GoogleAccessId": {sa},
        "Expires":        {expStr},
        "Signature":      {sig},
    }
    return fmt.Sprintf("%s%s?%s", host, resource, p.Encode()), err
}
I have no idea why the docs are so bad. The only other comprehensive answer on SO is great but tedious.
Enter the generate_signed_url method. Crawling down the rabbit hole, you will notice that the code path when using this method on GAE is the same as the solution in the above SO post. This method, however, is less tedious, has support for other environments, and has better error messages.
In code:
import time

from google.appengine.api import app_identity
from google.cloud import storage

def sign_url(obj, expires_after_seconds=60):
    client = storage.Client()
    default_bucket = '%s.appspot.com' % app_identity.get_application_id()
    bucket = client.get_bucket(default_bucket)
    blob = storage.Blob(obj, bucket)
    expiration_time = int(time.time() + expires_after_seconds)
    url = blob.generate_signed_url(expiration_time)
    return url
I came across this problem recently as well and found a solution to do this in Python within GAE using the built-in service account. Use the sign_blob() function in the google.appengine.api.app_identity package to sign the signature string, and use get_service_account_name() in the same package to get the value for GoogleAccessId.
Don't know why this is so poorly documented; even knowing now that this works, I can't find any hint via Google search that it should be possible to use the built-in account for this purpose. Very nice that it works though!
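A rough sketch of that approach follows. The helper names are mine, and the signer is injected as a callable so the URL assembly is visible on its own; on App Engine standard the signer would wrap app_identity.sign_blob as noted in the docstring:

```python
import base64
from urllib.parse import urlencode

GCS_API_ENDPOINT = 'https://storage.googleapis.com'

def make_gcs_signed_url(resource, expiration, google_access_id, sign_blob,
                        http_verb='GET', content_md5='', content_type=''):
    """Build a GCS query-string-auth URL; sign_blob(bytes) -> signature bytes.

    On App Engine standard, sign_blob would be
        lambda b: app_identity.sign_blob(b)[1]
    and google_access_id would come from app_identity.get_service_account_name().
    """
    # Same newline-joined string-to-sign as in the Go answer above
    string_to_sign = '\n'.join([
        http_verb, content_md5, content_type, str(expiration), resource])
    signature = base64.b64encode(sign_blob(string_to_sign.encode('utf-8')))
    params = urlencode({
        'GoogleAccessId': google_access_id,
        'Expires': str(expiration),
        'Signature': signature.decode('ascii'),
    })
    return '{}{}?{}'.format(GCS_API_ENDPOINT, resource, params)
```

Separating the signer this way also makes the URL-building logic testable off GAE with a dummy signer.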
Check out https://github.com/GoogleCloudPlatform/gcloud-python/pull/56
In Python, this does...
import base64
import time
import urllib
from datetime import datetime, timedelta

from Crypto.Hash import SHA256
from Crypto.PublicKey import RSA
from Crypto.Signature import PKCS1_v1_5
from OpenSSL import crypto

method = 'GET'
resource = '/bucket-name/key-name'
content_md5, content_type = None, None
expiration = datetime.utcnow() + timedelta(hours=2)
expiration = int(time.mktime(expiration.timetuple()))

# Generate the string to sign.
signature_string = '\n'.join([
    method,
    content_md5 or '',
    content_type or '',
    str(expiration),
    resource])

# Take our PKCS12 (.p12) key and make it into an RSA key we can use...
private_key = open('/path/to/your-key.p12', 'rb').read()
pkcs12 = crypto.load_pkcs12(private_key, 'notasecret')
pem = crypto.dump_privatekey(crypto.FILETYPE_PEM, pkcs12.get_privatekey())
pem_key = RSA.importKey(pem)

# Sign the string with the RSA key.
signer = PKCS1_v1_5.new(pem_key)
signature_hash = SHA256.new(signature_string)
signature_bytes = signer.sign(signature_hash)
signature = base64.b64encode(signature_bytes)

# Set the right query parameters.
query_params = {'GoogleAccessId': 'your-service-account@googleapis.com',
                'Expires': str(expiration),
                'Signature': signature}

# Return the built URL.
return '{endpoint}{resource}?{querystring}'.format(
    endpoint=self.API_ACCESS_ENDPOINT, resource=resource,
    querystring=urllib.urlencode(query_params))
And if you don't want to write it on your own, check out this class on GitHub; it's really easy to use:
GCSSignedUrlGenerator
