I have the following Python3 script:
import os, json
import googleapiclient.discovery
from google.oauth2 import service_account
from google.cloud import storage
storage_client = storage.Client.from_service_account_json('gcp-sa.json')
buckets = list(storage_client.list_buckets())
print(buckets)
compute = googleapiclient.discovery.build('compute', 'v1')
def list_instances(compute, project, zone):
    result = compute.instances().list(project=project, zone=zone).execute()
    return result['items'] if 'items' in result else None
list_instances(compute, "my-project", "my-zone")
Only listing the buckets, without the rest, works fine, which tells me that my service account (which has read access to the whole project) should work. How can I now list VMs? Using the code above, I get
raise exceptions.DefaultCredentialsError(_HELP_MESSAGE)
google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application. For more information, please see https://cloud.google.com/docs/authentication/getting-started
So that tells me that I somehow have to pass the service account JSON explicitly. How can I do that?
Thanks!!
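For reference, googleapiclient.discovery.build() accepts a credentials argument, so one way would be to load credentials from the same key file; a minimal sketch, assuming the gcp-sa.json from the snippet above:
from google.oauth2 import service_account
import googleapiclient.discovery
credentials = service_account.Credentials.from_service_account_file('gcp-sa.json')
compute = googleapiclient.discovery.build('compute', 'v1', credentials=credentials)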
Related
How to create a Cloud Run container with the Python client library (from google.cloud import run_v2)?
I can't find any example of creating a container this way in Python code.
Well, first you need to create a service account and credentials here: https://console.cloud.google.com/apis/credentials?project=
Next, you need to either download the key or use any other authentication method; in my example, the key.
# init credentials
from google.oauth2 import service_account
credentials = service_account.Credentials.from_service_account_file("project-key.json")

# create client
import google.cloud.run_v2 as run_v2
run_client = run_v2.ServicesClient(credentials=credentials)

# build request
from google.cloud.run_v2 import ListServicesRequest
request = ListServicesRequest(
    parent="projects/{projectnumber}/locations/{location}"
)

# response
response = run_client.list_services(request=request)
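list_services returns a pager you can iterate to inspect each service, for example (a minimal sketch, assuming the request above succeeded):
# each item is a run_v2.Service; name is its full resource path
for service in response:
    print(service.name)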
Here you can find samples: https://github.com/googleapis/python-run/tree/main/samples/generated_samples
Also keep in mind that permissions depend on the authentication method; with some methods an IP allowlist applies, with others it does not.
If I want to create/read/update spreadsheets using gspread, I know I first have to authenticate, like so:
import gspread
gc = gspread.service_account()
There I can also specify a filename pointing to a service account JSON key, but is there a way I can tell gspread to use the default service account credentials without pointing to a JSON file?
My use-case is that I want to run gspread in a vm (or cloud function) that already comes with an IAM role and I can't seem to figure out where to get the json file from. I also don't want to copy the json to the vm unnecessarily.
You can use google.auth to get the credentials of the default service account.
Then you can use gspread.authorize() with these credentials:
import google.auth
import gspread
credentials, project_id = google.auth.default(
    scopes=[
        'https://spreadsheets.google.com/feeds',
        'https://www.googleapis.com/auth/drive'
    ]
)
gc = gspread.authorize(credentials)
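After that, gc can be used as usual, for example (a sketch; the spreadsheet key is a placeholder, and the sheet must be shared with the service account):
# open a spreadsheet by its key and read one cell
sh = gc.open_by_key('your-spreadsheet-id')
print(sh.sheet1.acell('A1').value)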
As per the use case, Cloud Functions has a runtime service account feature which allows you to use the default service account, or you can attach any service account to the Cloud Function.
Using the google.auth library you can do this without a JSON file:
import google.auth
credentials, project_id = google.auth.default()
This code tries to list the files in a blob storage container:
#!/usr/bin/env python3
import os
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, __version__
from datetime import datetime, timedelta
import azure.cli.core as az
print(f"Azure Blob storage v{__version__} - Python quickstart sample")
account_name = "my_account"
container_name = "my_container"
path_on_datastore = "test/path"
def _create_sas(expire=timedelta(seconds=10)) -> str:
    cli = az.get_default_cli()
    expire_date = datetime.utcnow() + expire
    expiry_string = datetime.strftime(expire_date, "%Y-%m-%dT%H:%M:%SZ")
    cmd = ["storage", "container", "generate-sas", "--name", container_name, "--account-name",
           account_name, "--permissions", "lr", "--expiry", expiry_string, "--auth-mode", "login", "--as-user"]
    if cli.invoke(cmd) != 0:
        raise RuntimeError("Could not receive a SAS token for user {}#{}".format(
            account_name, container_name))
    return cli.result.result

sas = _create_sas()
blob_service_client = BlobServiceClient(
    account_url=f"{account_name}.blob.core.windows.net", container_name=container_name, credential=sas)
container_client = blob_service_client.create_container(container_name)
blob_list = container_client.list_blobs()
for blob in blob_list:
    print("\t" + blob.name)
That code worked fine a few weeks ago, but now we always get this error:
azure.core.exceptions.ClientAuthenticationError: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Does someone know what can be wrong?
PS: We are using the Azure blob storage package, version 12.3.2.
[Edit]
Because of security concerns we are not allowed to use account keys here.
I'm not entirely sure what is wrong with your code, but it looks like your SAS token is not in the expected format. Have you tested whether the SAS URL works in a browser?
Additionally, your _create_sas function seems to be creating the SAS signature with an Azure CLI command. I don't think you need to do this because the azure-storage-blob package has methods such as generate_account_sas to generate a SAS signature. This will eliminate a lot of complexity because you don't need to worry about the SAS signature format.
from datetime import datetime, timedelta
from azure.storage.blob import (
    BlobServiceClient,
    generate_account_sas,
    ResourceTypes,
    AccountSasPermissions,
)
from azure.core.exceptions import ResourceExistsError

account_name = "<account name>"
account_url = f"https://{account_name}.blob.core.windows.net"
container_name = "<container name>"

# Create SAS token credential
sas_token = generate_account_sas(
    account_name=account_name,
    account_key="<account key>",
    resource_types=ResourceTypes(container=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
This gives the SAS signature read, write, and list permissions on blob containers, with an expiry time of 1 hour. You can change this to your liking.
We can then create the BlobServiceClient with this SAS signature as a credential, then create the container client to list the blobs.
# Create Blob service client to interact with storage account
# Use SAS token as credential
blob_service_client = BlobServiceClient(account_url=account_url, credential=sas_token)

# First try to create container
try:
    container_client = blob_service_client.create_container(name=container_name)
# If container already exists, fetch the client
except ResourceExistsError:
    container_client = blob_service_client.get_container_client(container=container_name)

# List blobs in container
for blob in container_client.list_blobs():
    print(blob.name)
Note: The above uses azure-storage-blob==12.5.0, which is the latest package. This is not too far ahead of your version, so I would update your code to work with the latest functionality, as also shown in the documentation.
Update
If you are unable to use account keys for security reasons, then you can create a service principal and give it the Storage Blob Data Contributor role on your storage account. This gets created as an AAD application, which will have access to your storage account.
To get this setup, you can use this guide from the documentation.
Sample Code
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
token_credential = DefaultAzureCredential()
blob_service_client = BlobServiceClient(
    account_url="https://<my_account_name>.blob.core.windows.net",
    credential=token_credential
)
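With that client, listing the blobs then works the same way as with the SAS credential; a minimal sketch, assuming the container from the question already exists:
# fetch the container client and list blob names
container_client = blob_service_client.get_container_client("<container name>")
for blob in container_client.list_blobs():
    print(blob.name)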
It looks like the module is deprecated:
Starting with v5.0.0, the 'azure' meta-package is deprecated and cannot be
installed anymore. Please install the service specific packages prefixed by
azure needed for your application.
The complete list of available packages can be found at:
https://aka.ms/azsdk/python/all
A more comprehensive discussion of the rationale for this decision can be found
in the following issue:
https://github.com/Azure/azure-sdk-for-python/issues/10646
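In practice that means installing and importing the service-specific package you actually need; a minimal sketch for Blob storage:
# pip install azure-storage-blob
from azure.storage.blob import BlobServiceClient, __version__
print(__version__)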
I'm building an API that uploads images to Firebase Storage, and everything works as expected in that regard. The problem is that the syntax makes me specify the file name on each upload, and in production the API will receive upload requests from multiple devices. So I need the code to check for an available ID (or a random name, I don't care, as long as it doesn't overwrite another picture), set it on the "blob()" object, and then do a normal upload, but I have no idea how to do that.
Here is my current code:
from flask_pymongo import PyMongo
import firebase_admin
from firebase_admin import credentials, auth, storage, firestore
import os
import io
cred = credentials.Certificate('service_account_key.json')
firebase_admin.initialize_app(cred, {'storageBucket': 'MY-DATABASE-NAME.appspot.com'})
bucket = storage.bucket()
# Here is where I'm guessing I should put the next available name
blob = bucket.blob("images/newimage.png")

# "apple.png" is a sample image for testing in my directory
with open("apple.png", "rb") as f:
    blob.upload_from_file(f)
As "Klaus D."'s comment said the solution was to implement the "uuid" module
import uuid
.....
.....
blob = bucket.blob("images/" + str(uuid.uuid4()))
The access token I'm getting with gcloud auth print-access-token is obviously a different access token than the one I can get with some basic Python code:
export GOOGLE_APPLICATION_CREDENTIALS=/the-credentials.json
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
credentials.get_access_token()
What I am trying to do is get a token that would work with:
curl -u _token:<mytoken> https://eu.gcr.io/v2/my-project/my-docker-image/tags/list
I'd prefer not to install the gcloud utility as a dependency for my app, hence my attempts to obtain the access token programmatically via the OAuth Google credentials.
I know this is a very old question, but I was just faced with the exact same problem of requiring an ACCESS_TOKEN in Python and not being able to generate it, and managed to make it work.
What you need to do is use the variable credentials.token, except it won't be populated when you first create the credentials object and will return None. In order to generate a token, the credentials must be used by a Google Cloud library, which in my case was done by using the googleapiclient.discovery.build method:
import json
import googleapiclient.discovery

sqladmin = googleapiclient.discovery.build('sqladmin', 'v1beta4', credentials=credentials)
response = sqladmin.instances().get(project=PROJECT_ID, instance=INSTANCE_ID).execute()
print(json.dumps(response))
After which the ACCESS_TOKEN could be properly generated using
access_token = credentials.token
I've also tested it using google.cloud storage as a way to test credentials, and it also worked, by just trying to access a bucket in GCS through the appropriate Python library:
from google.oauth2 import service_account
from google.cloud import storage
PROJECT_ID = your_project_id_here
SCOPES = ['https://www.googleapis.com/auth/sqlservice.admin']
SERVICE_ACCOUNT_FILE = '/path/to/service.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE, scopes=SCOPES)

try:
    list(storage.Client(project=PROJECT_ID, credentials=credentials).bucket('random_bucket').list_blobs())
except:
    print("Failed because no bucket exists named 'random_bucket' in your project... but that doesn't matter, what matters is that the library tried to use the credentials and in doing so generated an access_token, which is what we're interested in right now")

access_token = credentials.token
print(access_token)
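Another way to populate the token, without making a throwaway API call, is to refresh the credentials explicitly through google-auth's transport; a minimal sketch reusing the credentials object from above:
import google.auth.transport.requests
# refreshing fills in credentials.token and credentials.expiry
credentials.refresh(google.auth.transport.requests.Request())
print(credentials.token)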
So I think there are a few questions:
gcloud auth print-access-token vs GoogleCredentials.get_application_default()
gcloud doesn't set application default credentials by default anymore when performing a gcloud auth login, so the access_token you're getting from gcloud auth print-access-token is going to be the one corresponding to the user you used to log in.
As long as you follow the instructions to create ADCs for a service account, that account has the necessary permissions, and the environment from which you are executing the script has access to the ENV var and the adc.json file, you should be fine.
How to make curl work
The Docker Registry API specifies that a token exchange should happen, swapping your Basic auth (i.e. Authorization: Basic base64(_token:<gcloud_access_token>)) for a short-lived Bearer token. This process can be a bit involved, but is documented here under "How to authenticate" and "Requesting a Token". Replace auth.docker.io/token with eu.gcr.io/v2/token and service=registry.docker.io with service=eu.gcr.io, etc. Use curl -u oauth2accesstoken:<mytoken> here.
See also: How to list images and tags from the gcr.io Docker Registry using the HTTP API?
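A rough Python equivalent of that exchange, using the requests library (the project and image names are the placeholders from the question, and access_token is assumed to come from the credentials flow shown above):
import requests
# 1. swap the OAuth2 access token for a short-lived registry Bearer token
exchange = requests.get(
    "https://eu.gcr.io/v2/token",
    params={"service": "eu.gcr.io",
            "scope": "repository:my-project/my-docker-image:pull"},
    auth=("oauth2accesstoken", access_token),
)
bearer = exchange.json()["token"]
# 2. call the registry API with the Bearer token
tags = requests.get(
    "https://eu.gcr.io/v2/my-project/my-docker-image/tags/list",
    headers={"Authorization": f"Bearer {bearer}"},
)
print(tags.json())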
Avoid the question entirely
We have a python lib that might be relevant to your needs:
https://github.com/google/containerregistry