Has anyone been able to get domain-wide delegation working using default credentials (i.e. an App Engine default service account, or credentials otherwise derived from the GOOGLE_APPLICATION_CREDENTIALS environment variable), specifically with the Drive or Gmail API? We've been able to follow this guide to use default credentials with Admin SDK APIs, but not with user-centric APIs like Gmail/Drive. We really dislike the key-management situation we're stuck in, deploying keys with code or loading them into GCS buckets, while knowing that many GCP-centric services don't have this problem (e.g. the google-cloud-firestore or google-cloud-bigquery Python clients).
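For context, the key-file flow we're stuck with looks roughly like this (a sketch only; the key path, scope, and delegated user below are placeholders):

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/gmail.readonly"]

# Domain-wide delegation today: load an exported key and impersonate a user.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("user@example.com")

gmail = build("gmail", "v1", credentials=credentials)
labels = gmail.users().labels().list(userId="me").execute()

We'd like to do the with_subject() part without ever exporting a key.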
Related
I have a question for the GCP connoisseurs among you.
I have an issue where I can upload to a bucket via the UI and gsutil, but if I try to do this via Python:
df.to_csv('gs://BUCKET_NAME/test.csv')
I get a 403 insufficient permission error.
My guess at the moment is that Python does this via an API that requires an extra role. To make things more confusing, I am already Owner of the project the bucket belongs to, and compared to other team members I did not really find any permissions I am lacking for this specific bucket.
I use Python 3.9.1 via pyenv and pandas 1.4.2.
Has anyone had the same issue / knows what role I am missing?
I checked that I do, in principle, have the rights to upload both via the UI and gsutil.
I used the same Python virtual environment to read from and write to BigQuery, to check that I can in principle use GCP data in Python; this works.
I have the following roles on the bucket:
Storage Admin, Storage Object Admin, Storage Object Creator, Storage Object Viewer
gsutil and gcloud share credentials.
These credentials are not shared with other code running locally.
The quick-fix but sub-optimal solution is to:
gcloud auth application-default login
And run the code again.
Your code will then use your gcloud (gsutil) user credentials through Application Default Credentials, as if it were running with a service account.
These credentials are stored (on Linux) in ${HOME}/.config/gcloud/application_default_credentials.json.
A better solution is to create a Service Account specifically for your app and grant it the minimal set of IAM permissions that it will need (BigQuery, GCS, ...).
For testing purposes (!) you can download the Service Account key locally.
You can then auth your code using Google's Application Default Credentials (ADC) by (on Linux):
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/key.json
python3 your_app.py
When you deploy code that leverages ADC to a Google Cloud compute service (Compute Engine, Cloud Run, ...), it can be deployed unchanged because the credentials for the compute resource will be automatically obtained from the Metadata service.
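For example, a minimal sketch using the google-cloud-storage client (the bucket and file names match the question and are placeholders); the pandas gs:// write from the question should pick up the same credentials, since it also goes through ADC:

from google.cloud import storage

# No key handling here: the client discovers credentials via ADC
# (env var, gcloud application-default login, or the metadata service).
client = storage.Client()
bucket = client.bucket("BUCKET_NAME")
bucket.blob("test.csv").upload_from_filename("test.csv")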
You can Google e.g. "Google IAM BigQuery" to find the documentation that lists the roles:
IAM roles for BigQuery
IAM roles for Cloud Storage
Within my code, I am attempting to gather the Application Default Credentials from the associated service account in Cloud Build:
from google.auth import default
credentials, project_id = default()
This works fine in my local space because I have set the environment variable GOOGLE_APPLICATION_CREDENTIALS appropriately. However, when this line is executed (via a test step in my build configuration) within Cloud Build, the following error is raised:
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials.
Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application.
For more information, please see https://cloud.google.com/docs/authentication/getting-started
This is confusing me because, according to the docs:
By default, Cloud Build uses a special service account to execute builds on your behalf. This service account is called the Cloud Build service account and it is created automatically when you enable the Cloud Build API in a Google Cloud project.
Read Here
If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set, ADC uses the service account that is attached to the resource that is running your code.
Read Here
So why is the default call not able to access the Cloud Build service account credentials?
There is a trick: you have to define the network to use in your Docker build. Use the parameter --network=cloudbuild, like this:
steps:
  - name: gcr.io/cloud-builders/docker
    entrypoint: 'docker'
    args:
      - build
      - '--no-cache'
      - '--network=cloudbuild'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - 'Dockerfile'
...
You can find the documentation here
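With that flag in place, a quick way to confirm that ADC resolves during the build is a tiny check script run from the Dockerfile (a sketch; it assumes google-auth is installed in the image and is run via something like RUN python check_adc.py):

# check_adc.py -- with --network=cloudbuild the metadata server is
# reachable from inside the build, so this call succeeds.
import google.auth

credentials, project_id = google.auth.default()
print("ADC resolved for project:", project_id)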
Application Default Credentials (ADC) defines the method for searching for credentials.
The Cloud Build VM instance is fetching credentials from metadata via network calls to 169.254.169.254. The container you are running does not have access to the host's network, meaning the code running inside the Docker container cannot access the host's metadata. Since there are no other credentials inside your container, ADC fails with the message Could not automatically determine credentials.
Solution: Provide credentials that are accessible inside the Docker container.
One method is to store a service account JSON key file in Cloud Storage. Then configure Cloud Build to download the file. Configure the Dockerfile to copy the file into the image.
Cloud Build YAML:
- name: gcr.io/cloud-builders/gsutil
  args: ['cp', 'gs://bucketname/path/service-account.json', 'service-account.json']
Dockerfile:
COPY ./service-account.json /path/service-account.json
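Inside the container, the code can then point at the copied key explicitly (a sketch; the path matches the Dockerfile above, and google-auth is assumed to be installed):

from google.oauth2 import service_account

# Load the key that was baked into the image by the COPY step above.
credentials = service_account.Credentials.from_service_account_file(
    "/path/service-account.json"
)
# Alternatively, set GOOGLE_APPLICATION_CREDENTIALS to that path so that
# plain google.auth.default() finds the key without any explicit loading.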
Google provides additional services such as Secret Manager that can also be used. I prefer Cloud Storage because the ease of storing and updating credentials makes documentation and management simple.
When weighing security, separation of privilege, and manageability, Google Cloud offers several methods.
Another method is the one posted by Guillaume Blaquiere.
In environments with relaxed security, where developers are granted broad permissions (IAM roles), Guillaume's answer is simple and easy to implement. In tightly controlled security environments, granting Cloud Build broad permissions is a security risk.
Security is often a tradeoff between protection, implementation effort, and ease of use. Granting IAM roles requires careful thought about how permissions will be used and by whom.
Can anyone please help me with how to fetch a file stored in Google Drive?
I have a Compute Engine VM in GCP with an associated service account. This service account has access to the Google Drive folder.
I thought of using a Python script on the VM that will access the file on Google Drive.
Not sure how to do this.
I guess you can try impersonating the service account attached to your VM.
Attaching a service account to a resource
For some Google Cloud resources, you can specify a user-managed service account that the resource uses as its default identity. This process is known as attaching the service account to the resource, or associating the service account with the resource.
When a resource needs to access other Google Cloud services and resources, it impersonates the service account that is attached to itself. For example, if you attach a service account to a Compute Engine instance, and the applications on the instance use a client library to call Google Cloud APIs, those applications automatically impersonate the attached service account.
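For example, from the VM a Drive file could be fetched roughly like this (a sketch; it assumes google-api-python-client and google-auth are installed, FILE_ID is a file the service account can read, and the VM's access scopes allow the Drive API):

import io

import google.auth
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# The attached service account is picked up automatically; no key file needed.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/drive.readonly"]
)
drive = build("drive", "v3", credentials=credentials)

# Stream the file contents into memory, then write them to disk.
request = drive.files().get_media(fileId="FILE_ID")
buffer = io.BytesIO()
downloader = MediaIoBaseDownload(buffer, request)
done = False
while not done:
    _, done = downloader.next_chunk()

with open("downloaded_file", "wb") as f:
    f.write(buffer.getvalue())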
Let me know if this was helpful.
My setup is: the code is in a private repository on GitHub, which I run from AWS EC2.
I have a doubt about where I should store the API and database credentials. My feeling at the moment is that no credentials should be stored in the code; instead, I should use AWS Secrets Manager to access them, but then you also need to connect to AWS. What is your view on it? A disclosure: I am starting with Python, so please be gentle.
Never store your secrets in code. In your case I would recommend AWS Secrets Manager (or SecureString parameters in AWS Systems Manager Parameter Store) and storing your secrets there.
I would recommend creating an IAM role for your EC2 instance with a policy that allows the role to read the correct secrets from AWS Secrets Manager. Connect the role to an instance profile and the instance profile to the EC2 instance. This is done automatically in the AWS console but not when you're using CloudFormation. An instance profile is a kind of wrapper around a role that allows the role to be attached to an instance.
In this flow your EC2 instance will be allowed to read the secrets from Secrets Manager by using the instance profile and role. Roles are the recommended way to make AWS resources interact with each other because they use temporary credentials and restrict access.
With the above setup you should be able to read the secrets from within your code, as explained here. You can use boto3 (the AWS SDK for Python) to talk to Secrets Manager from within the EC2 instance.
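For example, a minimal sketch of reading a secret with boto3 (the secret name and region here are placeholders):

import json

import boto3

# The EC2 instance profile supplies temporary credentials; nothing is hard-coded.
client = boto3.client("secretsmanager", region_name="eu-west-1")
response = client.get_secret_value(SecretId="my-app/db-credentials")
secret = json.loads(response["SecretString"])
db_password = secret["password"]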
Can I use Firebase with Python to do user account creation and logging in and out?
Or any other recommendations? From what I've read, it seems like it only supports Node.js.
There is an Admin SDK for Python for certain Firebase products, including Firebase Authentication.
The Admin SDK for Firebase is meant to run on servers and in other trusted environments. This means that a process using the Admin SDK has administrative access to all Firebase services. This for example allows you to easily create a new user account with the Python SDK.
But that also means that the Python SDK cannot be used to sign in with Firebase Authentication: after all, the process already runs with administrative privileges.
If you want to use the Python SDK to verify users, you should have the users sign in on the client with one of the regular Firebase Authentication SDKs. Then send the ID token from the client to your server, and use the Python SDK to verify that token.
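For example, a short sketch with the firebase-admin package (it assumes the process has admin credentials available, e.g. via GOOGLE_APPLICATION_CREDENTIALS):

import firebase_admin
from firebase_admin import auth

# Initializes the admin app using Application Default Credentials.
firebase_admin.initialize_app()

# Create an account server-side with administrative access.
user = auth.create_user(email="user@example.com", password="a-strong-password")

def verify_client_token(id_token: str) -> str:
    # The client signs in with a client-side Firebase Authentication SDK and
    # sends its ID token here; we verify it and return the user's UID.
    decoded = auth.verify_id_token(id_token)
    return decoded["uid"]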