I'm making an app using the Google Calendar API and planning to build it on Heroku.
I have a problem with authentication. Usually I use a credentials JSON file for that, but this time I don't want to upload it to Heroku for security reasons.
How can I handle authentication on Heroku?
For now, I put my JSON into an env variable and use oauth2client's from_json method.
def get_credentials():
    credentials_json = os.environ['GOOGLE_APPLICATION_CREDENTIALS']
    credentials = GoogleCredentials.from_json(credentials_json)
    if not credentials or credentials.invalid:
        flow = client.flow_from_clientsecrets(CLIENT_SECRET_FILE, SCOPES)
        flow.user_agent = APPLICATION_NAME
        if flags:
            credentials = tools.run_flow(flow, store, flags)
        else:  # Needed only for compatibility with Python 2.6
            credentials = tools.run(flow, store)
        print('Storing credentials to ' + credential_path)
    return credentials
But this code isn't perfect. If the credentials are invalid, I want the code to write the new credentials to the env variable, not to a new file.
Is there any better way?
I spent an entire day finding the solution, because it's tricky.
No matter your language, the solution is the same.
1 - Declare your env variables in the Heroku dashboard:
The GOOGLE_CREDENTIALS variable is the content of the service account credential JSON file, as is.
The GOOGLE_APPLICATION_CREDENTIALS env variable is the string "google-credentials.json".
2 - Once the variables are declared, add the buildpack from the command line:
$ heroku buildpacks:add https://github.com/elishaterada/heroku-google-application-credentials-buildpack
3 - Make a push. Update a tiny thing and push.
4 - The buildpack will automatically generate a google-credentials.json file and fill it with the content of the GOOGLE_CREDENTIALS variable.
If something went wrong, it will not work. Check the content of google-credentials.json with the Heroku bash (heroku run bash).
The buildpack mentioned by Maxime Boué is not working anymore because of Heroku updates (stack 18+). However, below is a similar buildpack that works; it is actually a fork of the previous one.
Use the link below in the buildpack setting of your Heroku app settings:
https://github.com/gerywahyunugraha/heroku-google-application-credentials-buildpack
In Config Vars, define GOOGLE_CREDENTIALS as the key and the content of your credential file as the value.
In Config Vars, define GOOGLE_APPLICATION_CREDENTIALS as the key and google-credentials.json as the value.
Redeploy the application; it should work!
If anyone is still looking for this, I've just managed to get it working for Google Cloud Storage by storing the JSON directly in an env variable (no extra buildpacks).
You'll need to place the JSON credentials data into your env vars and install google-auth.
Then parse the JSON and pass the Google credentials to the storage client:
import json
import os

from google.cloud import storage
from google.oauth2 import service_account

# the JSON credentials stored as an env variable
json_str = os.environ.get('GOOGLE_APPLICATION_CREDENTIALS')
# project name
gcp_project = os.environ.get('GCP_PROJECT')
# parse the JSON - if there are errors here, remove newlines in .env
json_data = json.loads(json_str)
# the private_key contains '\n' as literal characters; replace them with real newlines
json_data['private_key'] = json_data['private_key'].replace('\\n', '\n')
# use service_account to generate a credentials object
credentials = service_account.Credentials.from_service_account_info(
    json_data)
# pass credentials AND project name to the new client object (did not work without project name)
storage_client = storage.Client(
    project=gcp_project, credentials=credentials)
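As a quick sanity check, the client can then be used as usual (the bucket name below is made up):
bucket = storage_client.bucket('my-example-bucket')  # hypothetical bucket name
bucket.blob('hello.txt').upload_from_string('hello from Heroku')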
Hope this helps!
EDIT: Clarified that this was for Google Cloud Storage. The classes will differ for other services, but judging by other docs, the various Google client classes should all accept a credentials object.
The recommended buildpack doesn't work anymore. Here's a quick, direct way to do the same:
Set the config variables:
heroku config:set GOOGLE_APPLICATION_CREDENTIALS=gcp_key.json
heroku config:set GOOGLE_CREDENTIALS=<CONTENTS OF YOUR GCP KEY>
GOOGLE_CREDENTIALS is easier to set in the Heroku dashboard.
Create a .profile file in your repo with a line that writes the JSON file (quote the variable so the JSON survives intact; the relative path matches the config var above, since .profile runs in the app's home directory /app):
echo "${GOOGLE_CREDENTIALS}" > gcp_key.json
.profile is run every time the container starts.
Obviously, commit the changes to .profile and push to Heroku, which will trigger a rebuild and thus create that gcp_key.json file.
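Once the file exists, the Google client libraries pick it up through Application Default Credentials without further wiring (a sketch using google.auth, which the google-cloud libraries build on):
import google.auth
# reads the file pointed to by GOOGLE_APPLICATION_CREDENTIALS (gcp_key.json)
credentials, project = google.auth.default()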
More official Heroku documentation on this topic: https://elements.heroku.com/buildpacks/buyersight/heroku-google-application-credentials-buildpack
I also used buyersight's buildpack, and it was the only one that worked for me.
As stated above, the official Heroku documentation works (https://elements.heroku.com/buildpacks/buyersight/heroku-google-application-credentials-buildpack).
For PHP Laravel users, though, the config variable GOOGLE_APPLICATION_CREDENTIALS should be set to ../google-credentials.json. Otherwise, PHP will not find the file.
[Screenshot from Heroku]
I know this is old, but there is another alternative: "split" the JSON file and store each important field as its own environment variable.
Something like:
PRIVATE_KEY_ID=qsd
PRIVATE_KEY="-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
CLIENT_EMAIL=blabla@lalala.iam.gserviceaccount.com
CLIENT_ID=7777
CLIENT_X509_CERT_URL=https://www.googleapis.com/robot/v1/metadata/x509/whatever.iam.gserviceaccount.com
These can live in a .env file locally and be put on Heroku using the UI or heroku config:set commands.
Then in the Python file, you can initialize the ServiceAccount using a dict instead of a JSON file:
import os

from oauth2client.service_account import ServiceAccountCredentials

CREDENTIALS = {
    "type": "service_account",
    "project_id": "iospress",
    "private_key_id": os.environ["PRIVATE_KEY_ID"],
    "private_key": os.environ["PRIVATE_KEY"],
    "client_email": os.environ["CLIENT_EMAIL"],
    "client_id": os.environ["CLIENT_ID"],
    "auth_uri": "https://accounts.google.com/o/oauth2/auth",
    "token_uri": "https://oauth2.googleapis.com/token",
    "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
    "client_x509_cert_url": os.environ["CLIENT_X509_CERT_URL"]
}
credentials = ServiceAccountCredentials.from_json_keyfile_dict(CREDENTIALS, SCOPES)
It's a bit more verbose than some of the options presented here, but it works without any buildpack or other machinery.
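If you're on the newer google-auth library rather than oauth2client, the same dict should work too (a sketch):
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_info(CREDENTIALS, scopes=SCOPES)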
In case you do not want to use a buildpack:
1 - Add env variables in Heroku via the dashboard or CLI:
The GOOGLE_CREDENTIALS variable is the content of the service account credential JSON file.
GOOGLE_APPLICATION_CREDENTIALS = google-credentials.json
2 - Create a file called .profile at the root of your project with the following content (quoting the variable preserves the JSON):
echo "${GOOGLE_CREDENTIALS}" > /app/google-credentials.json
3 - Push your code.
4 - During startup, the container starts a bash shell that runs any code in $HOME/.profile before executing the dyno's command.
Note: for Laravel projects, GOOGLE_APPLICATION_CREDENTIALS = ../google-credentials.json
You can use the Heroku Platform API to update Heroku env vars from within your app.
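For example, a minimal sketch with the requests library; HEROKU_APP_NAME and HEROKU_API_TOKEN are assumed config vars you would provide yourself:
import os
import requests

def set_config_var(key, value):
    # PATCH the app's config vars via the Platform API;
    # Heroku restarts the dynos whenever a config var changes.
    resp = requests.patch(
        "https://api.heroku.com/apps/%s/config-vars" % os.environ["HEROKU_APP_NAME"],
        headers={
            "Accept": "application/vnd.heroku+json; version=3",
            "Authorization": "Bearer " + os.environ["HEROKU_API_TOKEN"],
            "Content-Type": "application/json",
        },
        json={key: value},
    )
    resp.raise_for_status()
    return resp.json()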
It seems that those buildpacks where you can upload the credentials.json file are not working as expected. I finally managed with lepinsk's buildpack (https://github.com/lepinsk/heroku-google-cloud-buildpack.git), which requires all keys and values to be set as config vars in Heroku. It does do the job, though, so many thanks for that!
I have done this as below:
Created two env variables:
CREDENTIALS - the base64-encoded value of the Google API credential JSON
CONFIG_FILE_PATH - /app/.gcp/key.json (we will create this file at run time, in the Heroku preinstall phase, as below)
Create a preinstall script:
"heroku-prebuild": "bash preinstall.sh",
And in the preinstall.sh file, decode CREDENTIALS and write it to the config file:
if [ "$CREDENTIALS" != "" ]; then
echo "Detected credentials. Adding credentials" >&1
echo "" >&1
# Ensure we have an gcp folder
if [ ! -d ./.gcp ]; then
mkdir -p ./.gcp
chmod 700 ./.gcp
fi
# Load the private key into a file.
echo $GCP_CREDENTIALS | base64 --decode > ./.gcp/key.json
# Change the permissions on the file to
# be read-only for this user.
chmod 400 ./.gcp/key.json
fi
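To produce the base64 value locally, something like this works (a sketch; key.json is your downloaded service account key file):
import base64

with open('key.json', 'rb') as f:
    print(base64.b64encode(f.read()).decode())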
If you're still having an issue running the app after following the buildpack instructions already mentioned here, try setting your Heroku environment variable GOOGLE_APPLICATION_CREDENTIALS to the full path instead:
GOOGLE_APPLICATION_CREDENTIALS = /app/google-credentials.json
This worked for me.
Previously, I could see that the google-credentials file was already generated by the buildpack (or .profile), but the Heroku app wasn't finding it, giving errors such as:
Error: Cannot find module './google-credentials.json'
Require stack:
- /app/server/src/config/firebase.js
- /app/server/src/middlewares/authFirebase.js
- /app/server/src/routes/v1/auth.route.js
- /app/server/src/routes/v1/index.js
- /app/server/src/app.js
- /app/server/src/index.js
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:902:15)
at Module.Hook._require.Module.require (/app/server/node_modules/require-in-the-middle/index.js:61:29)
at require (internal/modules/cjs/helpers.js:93:18)
at Object.<anonymous> (/app/server/src/config/firebase.js:3:24)
at Module._compile (internal/modules/cjs/loader.js:1085:14)
I just have to add this tip: be careful when creating the GOOGLE_APPLICATION_CREDENTIALS variable in the Heroku dashboard; it cost me a day. A path like server\credential.json for the credential file will not work because of the backslash, so use a forward slash / instead.
This will work as a path (without quotes):
server/credential.json
The simplest way I've found is to:
Save the credentials as a string in a Heroku ENV variable
In your app, load them into a Ruby Tempfile
Then have GoogleDrive::Session.from_service_account_key load them from that temp file:
require "tempfile"
...
google_credentials_tempfile = Tempfile.new("credentials.json")
google_credentials_tempfile.write(ENV["GOOGLE_CREDENTIALS_JSON"])
google_credentials_tempfile.rewind
session = GoogleDrive::Session.from_service_account_key(google_credentials_tempfile)
I have this in a heroku app and it works flawlessly.
Related
I'm hosting a Flask web app on Cloud Run. I'm also using Secret Manager to store Service Account keys. (I previously downloaded a JSON file with the keys.)
In my code, I'm accessing the payload and then using os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = payload to authenticate. When I deploy the app and try to visit the page, I get an Internal Server Error. Reviewing the logs, I see:
File "/usr/local/lib/python3.10/site-packages/google/auth/_default.py", line 121, in load_credentials_from_file
raise exceptions.DefaultCredentialsError(
google.auth.exceptions.DefaultCredentialsError: File {"
I can access the secret through gcloud just fine with: gcloud secrets versions access 1 --secret="<secret_id>" while acting as the Service Account.
Here is my Python code:
# Grabbing keys from Secret Manager
def access_secret_version():
    # Create the Secret Manager client.
    client = secretmanager.SecretManagerServiceClient()
    # Build the resource name of the secret version.
    name = "projects/{project_id}/secrets/{secret_id}/versions/1"
    # Access the secret version.
    response = client.access_secret_version(request={"name": name})
    payload = response.payload.data.decode("UTF-8")
    return payload

@app.route('/page/page_two')
def some_random_func():
    # New way
    payload = access_secret_version()  # <---- calling the payload
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = payload
    # Old way
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "service-account-keys.json"
I'm not technically accessing a JSON file like I was before; the payload variable is storing the entire key. Is this why it's not working?
Your approach is incorrect.
When you run on a Google compute service like Cloud Run, the code runs under the identity of the compute service.
In this case, by default, Cloud Run uses the Compute Engine default service account, but it's good practice to create a service account for your service and specify it when you deploy to Cloud Run (see Service accounts).
This mechanism is one of the "legs" of Application Default Credentials. When your code runs on Google Cloud, you don't specify the environment variable (you also don't need to create a key), and the Cloud Run service acquires credentials from the metadata service:
import google.auth
credentials, project_id = google.auth.default()
See google.auth package
It is bad practice to define or set an environment variable within code. By their nature, environment variables should be provided by the environment. Doing this with GOOGLE_APPLICATION_CREDENTIALS means that your code always sets this value, when it should only do so when the code is running off Google Cloud.
For completeness, if you need to create Credentials from a JSON string rather than from a file containing a JSON string, you can use from_service_account_info (see google.oauth2.service_account).
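A minimal sketch tying this to the question's code (the json import is an assumption):
import json
from google.oauth2 import service_account

payload = access_secret_version()
credentials = service_account.Credentials.from_service_account_info(json.loads(payload))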
Background:
I'm trying to deploy a Django app to the Google App Engine (GAE) standard environment in the python39 runtime.
The database configuration is stored in a Secret Manager secret version, similar to Google's GAE Django tutorial (link).
The app runs as a user-managed service account server@myproject.iam.gserviceaccount.com, which has the appropriate permissions to access the secret, as can be confirmed using gcloud secrets versions access.
Problem:
In the Django settings.py module, when I try to access the secret using google.cloud.secretmanager.SecretManagerServiceClient.access_secret_version(...), I get the following CONSUMER_INVALID error:
google.api_core.exceptions.PermissionDenied: 403 Permission denied on resource project myproject. [links {
description: "Google developer console API key"
url: "https://console.developers.google.com/project/myproject/apiui/credential"
}
, reason: "CONSUMER_INVALID"
domain: "googleapis.com"
metadata {
key: "service"
value: "secretmanager.googleapis.com"
}
metadata {
key: "consumer"
value: "projects/myproject"
}
My Debugging
I cannot reproduce the error above outside of GAE;
I can confirm that the SA can access the secret:
gcloud secrets versions access latest --secret=server_env --project myproject \
    --impersonate-service-account=server@myproject.iam.gserviceaccount.com
WARNING: This command is using service account impersonation. All API calls will be executed as [server@myproject.iam.gserviceaccount.com].
DATABASE_URL='postgres://django:...'
SECRET_KEY='...'
I've also confirmed that I can run the Django app locally with service account impersonation and make the above access_secret_version(...) calls.
In desperation, I even created an API key for the project and hardcoded it into my settings.py file; this raises the same error.
I've confirmed the following settings in the project:
the app is running with the correct user-managed SA
the call to access_secret_version is being made with the correct SA (i.e., the credentials are being pulled from the GAE environment correctly)
the project has the secretmanager.googleapis.com service enabled, has billing enabled, and the billing account is active
If you have any suggestions for a configuration or method to help debug this, I'd much appreciate it!
Relevant Code Snippets
app.yaml
service_account: server@myproject.iam.gserviceaccount.com
runtime: python39
handlers:
# This configures Google App Engine to serve the files in the app's static
# directory.
- url: /_static
static_dir: _static/
# This handler routes all requests not caught above to your main app. It is
# required when static routes are defined, but can be omitted (along with
# the entire handlers section) when there are no static files defined.
- url: /.*
script: auto
env_variables:
...
inbound_services:
- mail
- mail_bounce
app_engine_apis: true
Service Account Creation & Permissions
The SA is created with Terraform as below.
(The SA doesn't have the role roles/secretmanager.secretAccessor, but has an IAM binding directly on the secret itself.)
resource "google_service_account" "frontend_server" {
project = google_project.project.project_id
account_id = "server"
display_name = "Frontend Server Service Account"
}
resource "google_project_iam_member" "frontend_server" {
depends_on = [
google_service_account.frontend_server,
]
for_each = toset([
"roles/appengine.serviceAgent",
"roles/cloudsql.client",
"roles/cloudsql.instanceUser",
"roles/secretmanager.viewer",
"roles/storage.objectViewer",
])
project = google_project.project.project_id
role = each.key
member = "serviceAccount:${google_service_account.frontend_server.email}"
}
Django settings.py
The relevant sections of the app's settings.py are shown below; the access_secret_version call raises the error above.
import io
import logging

import environ
import google.auth
from google.cloud import secretmanager

# Load secrets from Secret Manager; the client is auth'd by SA IAM policies
credentials, project = google.auth.default(
    scopes=['https://www.googleapis.com/auth/cloud-platform']
)
secretmanager_client = secretmanager.SecretManagerServiceClient(credentials=credentials)

# Load the database connection string into the environment
# (GOOGLE_CLOUD_PROJECT and env are defined elsewhere in settings.py)
secrets = [
    f"projects/{GOOGLE_CLOUD_PROJECT}/secrets/server_env/versions/latest",
]
for name in secrets:
    try:
        logging.info(f"Reading secret {name} into django settings module...")
        payload = secretmanager_client.access_secret_version(name=name).payload.data.decode("UTF-8")
        env.read_env(io.StringIO(payload))
    except Exception as e:
        logging.error(f"Encountered error when accessing secret {name}: {e}")
        logging.error(f"Client credentials during error: {secretmanager_client._transport._credentials.__dict__}")
        raise e from None
You have granted the incorrect role. Have a look at the documentation page.
The Secret Viewer role allows you to view the secret and its versions, but NOT the content.
The Secret Accessor role (roles/secretmanager.secretAccessor) allows you to access the secret version's content.
Sigh. This was a terrible case of a hard-to-read typo in the app.yaml file. The project had a mnemonic substring with very similar letters that I had mistyped and just couldn't see.
FWIW, if anyone is running into a similar flub, you can at least avoid this one source of error:
I was passing a project prefix string and an environment string through the app.yaml file, and then in the settings.py file I concatenated these strings to make the project ID.
When running gcloud app deploy, the (correct) concatenated project string also existed in my shell's $GOOGLE_CLOUD_PROJECT variable, so the deployment happened with the right project ID.
However, I have now removed the concatenation code in settings.py in favour of the GOOGLE_CLOUD_PROJECT env variable that is always present in GAE (docs).
TL;DR: DRY is good.
I'm writing this Google Cloud Function (Python):

import os
import subprocess

def create_kubeconfig(request):
    subprocess.check_output("curl https://sdk.cloud.google.com | bash", stdin=subprocess.PIPE, shell=True)
    os.system("./google-cloud-sdk/install.sh")
    os.system("gcloud init")
    os.system("curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.0/bin/linux/amd64/kubectl")
    os.system("gcloud container clusters get-credentials **cluster name** --zone us-west2-a --project **project name**")
    conf = KubeConfig()
    conf.use_context('**cluster name**')

When I run the code, it gives me the error:
'Invalid kube-config file. ' kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found.
Help me to solve it, please.
You have to reach the K8s API programmatically. The API is described in the documentation.
It's not easy or simple to perform, but here are some inputs for achieving what you want.
First, get the GKE master IP.
Then you can access the cluster easily. Here, for reading the deployments:
import google.auth
from google.auth.transport import requests
credentials, project_id = google.auth.default()
session = requests.AuthorizedSession(credentials)
response = session.get('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', verify=False)
response.raise_for_status()
print(response.json())
For creating one, you can do this
import google.auth
from google.auth.transport import requests

credentials, project_id = google.auth.default()
session = requests.AuthorizedSession(credentials)

with open("deployment.yaml", "r") as f:
    data = f.read()

response = session.post('https://34.76.28.194/apis/apps/v1/namespaces/default/deployments', data=data,
                        headers={'content-type': 'application/yaml'}, verify=False)
response.raise_for_status()
print(response.json())
Depending on the object that you want to build, you have to use the correct file definition and the correct API endpoint. I don't know of a way to apply a whole YAML file with several definitions in only one API call.
Lastly, be sure to grant the correct GKE roles to the Cloud Function's service account.
UPDATE
Another solution is to use Cloud Run. Indeed, with Cloud Run and thanks to its container capability, you can install and call system processes (it's not totally open, because your container runs in a gVisor sandbox, but most common usages are allowed).
The idea is the following: use a gcloud SDK base image and deploy your application on it. Then code your app to perform system calls.
Here is a working example in Go.
Dockerfile:
FROM golang:1.13 as builder
# Copy local code to the container image.
WORKDIR /app/
COPY go.mod .
ENV GO111MODULE=on
RUN go mod download
COPY . .
# Perform test for building a clean package
RUN go test -v ./...
RUN CGO_ENABLED=0 GOOS=linux go build -v -o server
# Gcloud capable image
FROM google/cloud-sdk
COPY --from=builder /app/server /server
CMD ["/server"]
Note: the cloud-sdk base image is heavy: ~700 MB.
The code example (only the happy path; I removed error management and the stderr/stdout feedback to simplify the code):
.......
// Example here: recover the yaml file from a bucket
client, _ := storage.NewClient(ctx)
reader, _ := client.Bucket("my_bucket").Object("deployment.yaml").NewReader(ctx)
content, _ := ioutil.ReadAll(reader)
// You can store the file locally in the /tmp directory. It's an in-memory file system.
// Don't forget to purge it to avoid an out-of-memory crash.
ioutil.WriteFile("/tmp/file.yaml", content, 0644)
// Execute external commands
// First, recover the kube authentication
exec.Command("gcloud", "container", "clusters", "get-credentials", "cluster-1", "--zone=us-central1-c").Run()
// Then interact with the cluster with the kubectl tool and simply apply your description file
exec.Command("kubectl", "apply", "-f", "/tmp/file.yaml").Run()
.......
Instead of using gcloud inside the Cloud Function (and attempting to install it on every request, which will significantly increase your function's runtime), you should use the google-cloud-container client library to make the same API calls directly from Python. For example:
from google.cloud import container_v1
client = container_v1.ClusterManagerClient()
project_id = 'YOUR_PROJECT_ID'
zone = 'YOUR_PROJECT_ZONE'
response = client.list_clusters(project_id, zone)
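Note that newer releases of google-cloud-container (2.x) take a parent resource path instead of positional arguments (a hedged sketch):
response = client.list_clusters(parent=f"projects/{project_id}/locations/{zone}")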
When I try to upload sample CSV data to my GAE app through appcfg.py, it shows the 401 error below.
2015-11-04 10:44:41,820 INFO client.py:571 Refreshing due to a 401 (attempt 2/2)
2015-11-04 10:44:41,821 INFO client.py:797 Refreshing access_token
Error 401: --- begin server output ---
You must be logged in as an administrator to access this.
--- end server output ---
Here is the command I tried:
appcfg.py upload_data --application=dev~app --url=http://localhost:8080/_ah/remote_api --filename=data/sample.csv
This is how we do it in order to use custom authentication.
Custom handler in app.yaml:
- url: /remoteapi.*
  script: remote_api.app
Custom WSGI app in remote_api.py to override CheckIsAdmin:
from google.appengine.ext.remote_api import handler
from google.appengine.ext import webapp
import re

MY_SECRET_KEY = 'MAKE UP PASSWORD HERE'  # make one up, use the same one in the shell command
cookie_re = re.compile('^"?([^:]+):.*"?$')

class ApiCallHandler(handler.ApiCallHandler):
    def CheckIsAdmin(self):
        """Determine if admin access should be granted based on the
        auth cookie passed with the request."""
        login_cookie = self.request.cookies.get('dev_appserver_login', '')
        match = cookie_re.search(login_cookie)
        if (match and match.group(1) == MY_SECRET_KEY
                and 'X-appcfg-api-version' in self.request.headers):
            return True
        else:
            self.redirect('/_ah/login')
            return False

app = webapp.WSGIApplication([('.*', ApiCallHandler)])
From here we script the uploading of data that was exported from our live app. Use the same password that you made up in the Python script above.
echo "MAKE UP PASSWORD HERE" | appcfg.py upload_data --email=some@example.org --passin --url=http://localhost:8080/remoteapi --num_threads=4 --kind=WebHook --filename=webhook.data --db_filename=bulkloader-progress-webhook.sql3
WebHook and webhook.data are specific to the Kind that we exported from production.
I had a similar issue where appcfg.py was not giving me any credentials dialog, so I could not authenticate. I downgraded from GAE Launcher 1.9.27 to 1.9.26, and the authentication started working again.
Temporary solution: go to https://console.developers.google.com/storage/browser/appengine-sdks/featured/ to get version 1.9.26.
Submitted bug report: https://code.google.com/p/google-cloud-sdk/issues/detail?id=340
You cannot use the appcfg.py upload_data command with the development server as is [edit: see Josh J's answer]. It only works with the remote_api endpoint running on App Engine and authenticated with OAuth2.
An easy way to load data into the dev server's datastore is to create an endpoint that reads a CSV file and creates the appropriate datastore entities, then hit it with the browser, as sketched below. (Be sure to remove the endpoint before deploying the app, or restrict access to the URL with login: admin.)
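A minimal sketch of such an endpoint, assuming webapp2 and a hypothetical ndb model Record whose properties match the CSV columns:
import csv
import webapp2
from google.appengine.ext import ndb

class Record(ndb.Model):  # hypothetical Kind; adjust the properties to your CSV
    name = ndb.StringProperty()
    value = ndb.StringProperty()

class LoadCsvHandler(webapp2.RequestHandler):
    def get(self):
        # data/sample.csv is the file from the question, bundled with the app
        with open('data/sample.csv') as f:
            for row in csv.DictReader(f):
                Record(name=row['name'], value=row['value']).put()
        self.response.write('loaded')

app = webapp2.WSGIApplication([('/load_csv', LoadCsvHandler)])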
You may have a stored OAuth token for a Google account that is not an admin of that project. Try passing the --no_cookies flag so that it prompts for authentication again.
Maybe this has something to do with it? From the docs:

Connecting your app to the local development server

To use the local development server for your app running locally, you need to do the following:
1. Set environment variables.
2. Add or modify your app's Datastore connection code.

Setting environment variables

Create an environment variable DATASTORE_HOST and set it to the host and port on which the local development server is listening. The default host and port is http://localhost:8080. (Note: if you use the port and/or host command line arguments to change these defaults, be sure to adjust DATASTORE_HOST accordingly.) The following bash shell example shows how to set this variable:

export DATASTORE_HOST=http://localhost:8080

Create an environment variable named DATASTORE_DATASET and set it to your dataset ID, as shown in the following bash shell example:

export DATASTORE_DATASET=<dataset-id>

Note: both the Python and Java client libraries look for the environment variables DATASTORE_HOST and DATASTORE_DATASET.

Link to docs: https://cloud.google.com/datastore/docs/tools/devserver
I'm new to Python and boto. I've managed to sort out file uploads from my server to S3.
But once I've uploaded a new file I want to do an invalidation request.
I've got the code to do that:
import boto
print 'Connecting to CloudFront'
cf = boto.connect_cloudfront()
cf.create_invalidation_request(aws_distribution_id, ['/testkey'])
But I'm getting an error: NameError: name 'aws_distribution_id' is not defined
I guessed that I could add the distribution ID to the ~/.boto config, like the aws_secret_access_key etc.:
$ cat ~/.boto
[Credentials]
aws_access_key_id = ACCESS-KEY-ID-GOES-HERE
aws_secret_access_key = ACCESS-KEY-SECRET-GOES-HERE
aws_distribution_id = DISTRIBUTION-ID-GOES-HERE
But that's not actually listed in the docs, so I'm not too surprised it failed:
http://docs.pythonboto.org/en/latest/boto_config_tut.html
My problem is I don't want to add the distribution_id to the script itself, as I run it on both my live and staging servers, and I have different S3 and CloudFront setups for both.
So I need the distribution_id to change per server, which is how I've got the AWS access keys set.
Can I add something else to the boto config or is there a python user defaults I could add it to?
Since you can have multiple CloudFront distributions per account, it wouldn't make sense to configure it in .boto.
You could have another config file specific to your own environment and run your invalidation script with the config file as an argument (or have the same file, but with different data depending on your environment).
I solved this by using ConfigParser. I added the following to the top of my script:
import os
import ConfigParser

# read conf (expanduser resolves the ~, which ConfigParser does not do itself)
config = ConfigParser.ConfigParser()
config.read(os.path.expanduser('~/my-app.cnf'))
distribution_id = config.get('aws_cloudfront', 'distribution_id')
And inside the conf file at ~/my-app.cnf:
[aws_cloudfront]
distribution_id = DISTRIBUTION_ID
So on my live server I just need to drop the cnf file into the user's home dir and change the distribution_id.
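Putting it together with the code from the question (a sketch; boto still reads the AWS keys from ~/.boto):
import boto

cf = boto.connect_cloudfront()
cf.create_invalidation_request(distribution_id, ['/testkey'])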