Within my code, I am attempting to gather the Application Default Credentials from the associated service account in Cloud Build:
from google.auth import default
credentials, project_id = default()
This works fine in my local space because I have set the environment variable GOOGLE_APPLICATION_CREDENTIALS appropriately. However, when this line is executed (via a test step in my build configuration) within Cloud Build, the following error is raised:
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials.
Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application.
For more information, please see https://cloud.google.com/docs/authentication/getting-started
This is confusing me because, according to the docs:
By default, Cloud Build uses a special service account to execute builds on your behalf. This service account is called the Cloud Build service account and it is created automatically when you enable the Cloud Build API in a Google Cloud project.
Read Here
If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set, ADC uses the service account that is attached to the resource that is running your code.
Read Here
So why is the default call not able to access the Cloud Build service account credentials?
There is a trick: you have to define the network to use in your Docker build. Use the parameter --network=cloudbuild, like this:
steps:
- name: gcr.io/cloud-builders/docker
entrypoint: 'docker'
args:
- build
- '--no-cache'
- '--network=cloudbuild'
- '-t'
- '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
- .
- '-f'
- 'Dockerfile'
...
You can find the documentation here
Application Default Credentials (ADC) defines the method for searching for credentials.
The Cloud Build VM instance is fetching credentials from metadata via network calls to 169.254.169.254. The container you are running does not have access to the host's network, meaning the code running inside the Docker container cannot access the host's metadata. Since there are no other credentials inside your container, ADC fails with the message Could not automatically determine credentials.
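You can verify this from inside a build step: when no key file is set, ADC falls back to querying the metadata server over the network. A minimal probe, assuming the requests package is available in the image:
import requests  # assumption: requests is installed in the build image

# ADC falls back to the metadata server at this link-local address.
TOKEN_URL = ("http://169.254.169.254/computeMetadata/v1/"
             "instance/service-accounts/default/token")

try:
    resp = requests.get(TOKEN_URL, headers={"Metadata-Flavor": "Google"}, timeout=2)
    resp.raise_for_status()
    print("Metadata server reachable; ADC can use the attached service account.")
except requests.RequestException as exc:
    print("Metadata server unreachable, so ADC fails:", exc)
With --network=cloudbuild the container can reach that address; without it, the request fails and ADC has nothing to fall back on.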
Solution: provide credentials that are accessible inside the Docker container.
One method is to store a service account JSON key file in Cloud Storage, configure Cloud Build to download the file, and have the Dockerfile copy it into the image.
Cloud Build YAML:
- name: gcr.io/cloud-builders/gsutil
args: ['cp', 'gs://bucketname/path/service-account.json', 'service-account.json']
Dockerfile:
COPY ./service-account.json /path/service-account.json
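With the key baked into the image, the application can point ADC at it explicitly. A minimal sketch; the path mirrors the /path/service-account.json used in the COPY above:
from google.oauth2 import service_account

# Load the copied key file explicitly instead of relying on the metadata server.
credentials = service_account.Credentials.from_service_account_file(
    "/path/service-account.json")
Alternatively, set ENV GOOGLE_APPLICATION_CREDENTIALS=/path/service-account.json in the Dockerfile so that google.auth.default() continues to work unchanged.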
Google provides additional services, such as Secret Manager, that can also be used. I prefer Cloud Storage because storing and updating credentials there is straightforward to document and manage.
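If you prefer Secret Manager, a rough sketch of fetching the key in a regular build step (where the build's own credentials are available), before the docker build, so it can be written to disk and copied into the image; the project and secret names are placeholders:
from google.cloud import secretmanager

# Fetch the stored key and write it next to the Dockerfile for the COPY step.
client = secretmanager.SecretManagerServiceClient()
name = "projects/my-project/secrets/sa-key/versions/latest"
payload = client.access_secret_version(request={"name": name}).payload.data
with open("service-account.json", "wb") as f:
    f.write(payload)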
When weighing security, separation of privilege, and manageability, Google Cloud offers several methods.
Another method is the one posted by Guillaume Blaquiere.
In environments with relaxed security, where developers are granted broad permissions (IAM roles), Guillaume's answer is simple and easy to implement. In tightly controlled environments, granting Cloud Build broad permissions is a security risk.
Security is often a tradeoff between risk, implementation effort, and ease of use. Granting IAM roles requires careful thought about how permissions will be used and by whom.
Related
I have a question for the GCP connoisseurs among you.
I have an issue: I can upload to a bucket via the UI and gsutil, but if I try to do this via Python
df.to_csv('gs://BUCKET_NAME/test.csv')
I get a 403 insufficient permission error.
My guess at the moment is that Python does this via an API and requires an extra role. To make things more confusing, I am already project owner of the project that contains the bucket, and compared to other team members I could not find any permissions I am lacking for this specific bucket.
I use Python 3.9.1 via pyenv and pandas 1.4.2.
Has anyone had the same issue or know what role I am missing?
I checked that I have, in principle, the rights to upload both via the UI and gsutil.
I used the same virtual Python environment to read and write from BigQuery to check that I can in principle use GCP data in Python; this works.
I have the following Roles on the Bucket
Storage Admin, Storage Object Admin, Storage Object Creator, Storage Object Viewer
gsutil and gcloud share credentials.
These credentials are not shared with other code running locally.
The quick-fix but sub-optimal solution is to:
gcloud auth application-default login
And run the code again.
It will then use your gcloud (gsutil) user credentials configured to run as if you were using a Service Account.
These credentials are stored (on Linux) in ${HOME}/.config/gcloud/application_default_credentials.json.
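To confirm which credentials ADC picks up (and for which project), a quick check, assuming the google-auth package is installed:
import google.auth

# Prints the credential class and the project that ADC resolved.
credentials, project = google.auth.default()
print(type(credentials).__name__, project)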
A better solution is to create a Service Account specifically for your app and grant it the minimal set of IAM permissions that it will need (BigQuery, GCS, ...).
For testing purposes (!) you can download the Service Account key locally.
You can then auth your code using Google's Application Default Credentials (ADC) by (on Linux):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/key.json
python3 your_app.py
When you deploy code that leverages ADC to a Google Cloud compute service (Compute Engine, Cloud Run, ...), it can be deployed unchanged because the credentials for the compute resource will be automatically obtained from the Metadata service.
You can Google e.g. "Google IAM BigQuery" to find the documentation that lists the roles:
IAM roles for BigQuery
IAM roles for Cloud Storage
I just installed google-cloud-vision in my ec2-instance.
To run my code, I just noticed that I should set GOOGLE_APPLICATION_CREDENTIALS (the error told me to set it: DefaultCredentialsError).
But I don't know how to get JSON credentials and set them within my EC2 instance.
There's a JSON key on my real computer, but how can I move it to my virtual machine?
Copy the service account JSON key file to the EC2 instance using SSH file copy. Set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the full path of the file. Make sure you understand the security risks of service account keys.
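As a rough sketch, after copying the key to the instance (for example with scp), point ADC at it before constructing the Vision client; the path below is a placeholder:
import os
from google.cloud import vision

# Point ADC at the copied key, then create the Vision client.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/ec2-user/keys/service-account.json"
client = vision.ImageAnnotatorClient()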
For advanced users, use Workload Identity Federation to federate AWS credentials to Google Cloud credentials so that a service account JSON key is not required.
I'm having trouble submitting an Apache Beam example from a local machine to our cloud platform.
Using gcloud auth list I can see that the correct account is currently active. I can use gsutil and the web client to interact with the file system. I can use the cloud shell to run pipelines through the python REPL.
But when I try and run the python wordcount example I get the following error:
IOError: Could not upload to GCS path gs://my_bucket/tmp: access denied.
Please verify that credentials are valid and that you have write access
to the specified path.
Is there something I am missing with regards to the credentials?
Here are my two cents after spending the whole morning on the issue.
You should make sure that you log in with gcloud on your local machine; however, pay attention to the warning message returned by gcloud auth login:
WARNING: `gcloud auth login` no longer writes application default credentials.
These application default credentials are what the Python code needs to authenticate properly.
Solution is rather simple, just use:
gcloud auth application-default login
This will write a credentials file at ~/.config/gcloud/application_default_credentials.json, which is used for authentication in the local development environment.
You'll need to create a GCS bucket and folder for your project, then specify that as the pipeline parameter instead of using the default value.
https://cloud.google.com/storage/docs/creating-buckets
Same error; solved after creating a bucket:
gsutil mb gs://<bucket-name-from-the-error>/
I faced the same issue with the IO error. Things that helped me here (in no particular order):
Checking the name of the bucket. This step helped me a lot. Bucket names are global, so if you make a mistake in the bucket name you might be accessing a bucket you have NOT created and don't have permission to use.
Checking the service account key file that you have filled in:
export GOOGLE_APPLICATION_CREDENTIALS=yourkeyfile.json
Activating the service account for the key file you have plugged in -
gcloud auth activate-service-account --key-file=your-key-file.json
Listing the available auth accounts might also help:
gcloud auth list
One solution might work for you. It did for me.
In the Cloud Shell window, click on "Launch code Editor" (the pencil icon). The editor works in Chrome (not sure about Firefox); it did not work in the Brave browser.
Now browse to your code file (.py or .java) in the launched code editor on GCP, locate the predefined PROJECT and BUCKET names, replace them with your own project and bucket names, and save the file.
Now execute the file and it should work.
Python doesn't use gcloud auth to authenticate; it uses the environment variable GOOGLE_APPLICATION_CREDENTIALS. So before you run the Python command to launch the Dataflow job, you will need to set that environment variable:
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/key"
More info on setting up the environment variable: https://cloud.google.com/docs/authentication/getting-started#setting_the_environment_variable
Then you'll have to make sure that the account you set up has the necessary permissions in your GCP project.
Permissions and service accounts:
User service account or user account: it needs the Dataflow Admin role at the project level and must be able to act as the worker service account (source: https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#worker_service_account).
Worker service account: there will be one worker service account per Dataflow pipeline. This account needs the Dataflow Worker role at the project level plus the necessary permissions on the resources accessed by the Dataflow pipeline (source: https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#worker_service_account). Example: if the Dataflow pipeline's input is a Pub/Sub topic and its output is a BigQuery table, the worker service account needs read access to the topic as well as write permission on the BQ table.
Dataflow service account: this is the account that gets automatically created when you enable the Dataflow API in a project. It automatically gets the Dataflow Service Agent role at the project level (source: https://cloud.google.com/dataflow/docs/concepts/security-and-permissions#service_account).
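Once the environment variable and roles are in place, launching a pipeline with explicit options looks roughly like this; project, region, and bucket names are placeholders:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Assumes GOOGLE_APPLICATION_CREDENTIALS is set and the account has the roles above.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-project",
    region="us-central1",
    temp_location="gs://my_bucket/tmp",
)

with beam.Pipeline(options=options) as pipeline:
    _ = (
        pipeline
        | beam.Create(["hello", "world"])
        | beam.Map(print)
    )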
There's a GAE project using the GCS to store/retrieve files. These files also need to be read by code that will run on GCE (needs C++ libraries, so therefore not running on GAE).
In production, deployed on the actual GAE > GCS < GCE, this setup works fine.
However, testing and developing locally is a different story that I'm trying to figure out.
As recommended, I'm running GAE's dev_appserver with GoogleAppEngineCloudStorageClient to access the (simulated) GCS. Files are put in the local blobstore. Great for testing GAE.
Since there is no GCE SDK to run a VM locally, whenever I refer to the local 'GCE', it's just my local development machine running Linux.
On the local GCE side I'm just using the default boto library (https://developers.google.com/storage/docs/gspythonlibrary) with a Python 2.x runtime to interface with the C++ code and retrieve files from GCS. However, in development, these files are inaccessible from boto because they're stored in the dev_appserver's blobstore.
Is there a way to properly connect the local GAE and GCE to a local GCS?
For now, I gave up on the local GCS part and tried using the real GCS. The GCE part with boto is easy. The GAE part can also use the real GCS, instead of the local blobstore, by providing an access_token:
cloudstorage.common.set_access_token(access_token)
According to the docs:
access_token: you can get one by run 'gsutil -d ls' and copy the str after 'Bearer'.
That token works for a limited amount of time, so that's not ideal. Is there a way to set a more permanent access_token?
There is a convenient option for accessing Google Cloud Storage from the development environment: use the client library provided with the Google Cloud SDK. After executing gcloud init locally, you get access to your resources.
As shown in the Client library authentication examples:
# Get the application default credentials. When running locally, these are
# available after running `gcloud init`. When running on compute
# engine, these are available from the environment.
credentials = GoogleCredentials.get_application_default()
# Construct the service object for interacting with the Cloud Storage API -
# the 'storage' service, at version 'v1'.
# You can browse other available api services and versions here:
# https://developers.google.com/api-client-library/python/apis/
service = discovery.build('storage', 'v1', credentials=credentials)
Google libraries come and go like tourists in a train station. Today (2020) google-cloud-storage should work on GCE and GAE Standard Environment with Python 3.
On GAE and GCE it picks up access credentials from the environment, and locally you can provide it with a service account JSON file like this:
GOOGLE_APPLICATION_CREDENTIALS=../sa-b0af54dea5e.json
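A minimal sketch with google-cloud-storage; bucket and object names are placeholders. With the variable set locally, or on GCE/GAE, the client resolves credentials on its own:
from google.cloud import storage

# Credentials come from ADC (the env var locally, the metadata server on GCE/GAE).
client = storage.Client()
bucket = client.bucket("my-bucket")
bucket.blob("test.txt").upload_from_string("hello")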
If you're always using "real" remote GCS, the newer gcloud is probably the best library: http://googlecloudplatform.github.io/gcloud-python/
It's really confusing how many storage client libraries there are for Python. Some are for AE only, but they often force (or at least default to) using the local mock Blobstore when running with dev_appserver.py.
Seems like gcloud is always using the real GCS, which is what I want.
It also "magically" fixes authentication when running locally.
It looks like appengine-gcs-client for Python is now only useful for production App Engine and inside dev_appserver.py, and the local examples for it have been removed from the developer docs in favor of Boto :( If you are deciding not to use the local GCS emulation, it's probably best to stick with Boto for both local testing and GCE.
If you still want to use 'google.appengine.ext.cloudstorage', though, access tokens always expire, so you'll need to refresh them manually. Given your setup, honestly the easiest thing to do is just call 'gsutil -d ls' from Python and parse the output to get a new token from your local credentials. You could use the API Client Library to get a token in a more 'correct' fashion, but at that point things would be getting so roundabout you might as well just be using Boto.
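A rough sketch of that token-scraping hack; it assumes gsutil is on the PATH and that the Bearer token still appears in the debug output, which is not guaranteed:
import re
import subprocess

# gsutil -d prints debug info including an Authorization header; scrape the token.
output = subprocess.check_output(["gsutil", "-d", "ls"], stderr=subprocess.STDOUT)
match = re.search(r"Bearer\s+(\S+)", output.decode("utf-8", "ignore"))
access_token = match.group(1) if match else None
# then: cloudstorage.common.set_access_token(access_token)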
There is a Google Cloud Storage local / development server for this purpose: https://developers.google.com/datastore/docs/tools/devserver
Once you have set it up, create a dataset and start the GCS development server
gcd.sh create [options] <dataset-directory>
gcd.sh start [options] <dataset-directory>
Export the environment variables
export DATASTORE_HOST=http://yourmachine:8080
export DATASTORE_DATASET=<dataset_id>
Then you should be able to use the datastore connection in your code, locally.
I have an application that needs to log into a singular Drive account and perform operations on the files automatically using a cron job. Initially, I tried to use the domain administrator login to do this, however I am unable to do any testing with the domain administrator as it seems that you cannot use the test server with a domain administrator account, which makes testing my application a bit impossible!
As such, I started looking at storing arbitrary OAuth tokens, especially the refresh token, to log into this account automatically after the initial setup. However, all of the APIs and documentation assume that multiple individual users are logging in manually, and I cannot find functionality in the OAuth APIs that allows or accounts for logging in as anything but the currently logged-in user.
How can I achieve this in a way that I can test my code on a test domain? Can I do it without writing my own oauth library and doing the oauth requests by hand? Or is there a way to get the domain administrator authorization to work on a local test server?
You can load the credentials for a single account into your datastore using the Remote API, which can be enabled in your app.yaml file:
builtins:
- remote_api: on
By executing
remote_api_shell.py -s your_app_id.appspot.com
from the command line you'll have access to a shell which can execute in the environment of your application. Before doing this, make sure you have your application deployed (more on local development below) and make sure the source for google-api-python-client is included by pip-installing it and running enable-app-engine-project /path/to/project to add it to your App Engine project.
Once you are in the remote shell (after executing the remote command above), perform the following:
from oauth2client.appengine import CredentialsModel
from oauth2client.appengine import StorageByKeyName
from oauth2client.client import OAuth2WebServerFlow
from oauth2client.tools import run
KEY_NAME = 'your_choice_here'
CREDENTIALS_PROPERTY_NAME = 'credentials'
SCOPE = 'https://www.googleapis.com/auth/drive'
storage = StorageByKeyName(CredentialsModel, KEY_NAME, CREDENTIALS_PROPERTY_NAME)
flow = OAuth2WebServerFlow(
client_id=YOUR_CLIENT_ID,
client_secret=YOUR_CLIENT_SECRET,
scope=SCOPE)
run(flow, storage)
NOTE: If you have not deployed your application with the google-api-python-client code, this will fail, because your application won't know how to make the same imports you made on your local machine, e.g. from oauth2client.appengine import CredentialsModel.
When run is called, your web browser will open and prompt you to accept OAuth access for the client you've specified with CLIENT_ID and CLIENT_SECRET. After successful completion, it will save an instance of CredentialsModel in the datastore of the deployed application your_app_id.appspot.com, stored under the KEY_NAME you provided.
After doing this, any caller in your application -- including your cron jobs -- can access those credentials by executing
storage = StorageByKeyName(CredentialsModel, KEY_NAME, CREDENTIALS_PROPERTY_NAME)
credentials = storage.get()
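From there, a sketch of building a Drive client with the retrieved credentials, in the same oauth2client/apiclient style as the code above:
import httplib2
from apiclient import discovery

# 'credentials' is the object returned by storage.get() above.
http = credentials.authorize(httplib2.Http())
drive_service = discovery.build('drive', 'v2', http=http)
files = drive_service.files().list().execute()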
Local Development:
If you'd like to test this locally, you can run your application locally via
dev_appserver.py --port=PORT /path/to/project
and you can execute the same commands using the remote API shell and pointing it at your local application:
remote_api_shell.py -s localhost:PORT
Once here, you can execute the same code you did in the remote api shell and similarly an instance of CredentialsModel will be stored in the datastore of your local development server.
As above, if you don't have the correct google-api-python-client modules included, this will fail.
EDIT: This used to recommend using the Interactive Console at:
http://localhost:PORT/_ah/admin/interactive
but it was discovered that this doesn't work because socket does not work properly in the App Engine local development sandbox.
This article explains how to interact with Google Drive on behalf of users of your domain by having the Domain Administrator delegate domain-wide authority to a Service Account
This other article explains how to interact with a Drive owned by your application using a Service Account.
Note that both methods use JWT-based Service Accounts, which currently need a modified version of the google-api-python-client in order to work on App Engine.
Unlike the Google App Engine Service Account, JWT-based Service Accounts should work with the development server.
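For reference, a rough sketch of a JWT-based service account credential with domain-wide delegation in the oauth2client style of that era; the key file, the email addresses, and the availability of the sub parameter are assumptions:
from oauth2client.client import SignedJwtAssertionCredentials

# Impersonate a domain user with a JWT-based service account credential.
with open('privatekey.pem', 'rb') as key_file:
    private_key = key_file.read()

credentials = SignedJwtAssertionCredentials(
    'your-service-account@developer.gserviceaccount.com',  # placeholder
    private_key,
    scope='https://www.googleapis.com/auth/drive',
    sub='user@yourdomain.com')  # the domain user to act as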