Stackdriver Google Python API Access Denied

When trying to create a sink using the Google Cloud Python 3 API client, I get the error:
RetryError: GaxError(Exception occurred in retry method that was not classified as transient, caused by <_Rendezvous of RPC that terminated with (StatusCode.PERMISSION_DENIED, The caller does not have permission)>)
The code I used was this one:
import os
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'path_to_json_secrets.json'
from google.cloud.bigquery.client import Client as bqClient
bqclient = bqClient()
ds = bqclient.dataset('dataset_name')
print(ds.access_grants)
[]
ds.delete()
ds.create()
print(ds.access_grants)
[<AccessGrant: role=WRITER, specialGroup=projectWriters>,
<AccessGrant: role=OWNER, specialGroup=projectOwners>,
<AccessGrant: role=OWNER, userByEmail=id_1@id_2.iam.gserviceaccount.com>,
<AccessGrant: role=READER, specialGroup=projectReaders>]
from google.cloud.logging.client import Client as lClient
lclient = lClient()
dest = 'bigquery.googleapis.com%s' %(ds.path)
sink = lclient.sink('sink_test', filter_='jsonPayload.project=project_name', destination=dest)
sink.create()
I don't quite understand why this is happening. When I use lclient.log_struct() I can see the logs arriving in the Logging console, so I do have access to Stackdriver Logging.
Is there any mistake in this setup?
Thanks in advance.

Creating a sink requires different permissions than writing a log entry. By default service accounts are given project Editor (not Owner), which does not have permission to create sinks.
See the list of permissions required in the access control docs.
Make sure the service account you're using has the logging.sinks.create permission. The simplest way to do this is to switch the service account from Editor to Owner, but it is better to grant the Logs Configuration Writer role (roles/logging.configWriter) so you give it only the permission it needs.
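For example, assuming a project called my-project and a service account my-sa@my-project.iam.gserviceaccount.com (both placeholders), the role can be granted with a single gcloud command:
gcloud projects add-iam-policy-binding my-project --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" --role="roles/logging.configWriter"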

Related

How to run Python code with an impersonated service account

I am coming here after searching Google, but I am not able to find any answer that I can understand. Kindly help me with my query.
If I want to access a GCP resource using an impersonated service account, I know I can do it with commands like the following, for example to list a bucket in a project:
gsutil -i service-account-id ls -p project-id
But how can I run a Python script, e.g. test1.py, that accesses the resources using an impersonated service account?
Is there any package or class that I need to use, and if so, how? Below is the scenario and code:
I have a Pub/Sub topic hosted in project-A, whose owner is xyz@gmail.com, and I have Python code hosted in project-B, whose owner is abc@gmail.com.
In project-A I have created a service account with the Pub/Sub Admin role and allowed abc@gmail.com to impersonate it. Now how can I access the Pub/Sub topic from my Python code in project-B without using keys?
"""Publishes multiple messages to a Pub/Sub topic with an error handler."""
import os
from collections.abc import Callable
from concurrent import futures
from google.cloud import pubsub_v1
# os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:\gcp_poc\key\my-GCP-project.JSON"
project_id = "my-project-id"
topic_id = "myTopic1"
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)
publish_futures = []
def get_callback(publish_future: pubsub_v1.publisher.futures.Future, data: str) -> Callable[[pubsub_v1.publisher.futures.Future], None]:
    def callback(publish_future: pubsub_v1.publisher.futures.Future) -> None:
        try:
            # Wait 60 seconds for the publish call to succeed.
            print(f"Printing the publish future result here: {publish_future.result(timeout=60)}")
        except futures.TimeoutError:
            print(f"Publishing {data} timed out.")
    return callback
for i in range(4):
    data = str(i)
    # When you publish a message, the client returns a future.
    publish_future = publisher.publish(topic_path, data.encode("utf-8"))
    # Non-blocking. Publish failures are handled in the callback function.
    publish_future.add_done_callback(get_callback(publish_future, data))
    publish_futures.append(publish_future)
# Wait for all the publish futures to resolve before exiting.
futures.wait(publish_futures, return_when=futures.ALL_COMPLETED)
print(f"Published messages with error handler to {topic_path}.")
Create a Service Account with the appropriate role, then create a Service Account key file and download it. Put the path of the key file in the GOOGLE_APPLICATION_CREDENTIALS environment variable; the client library will pick up that key file and use it for authentication and authorization. Please read the official documentation to learn more about how Application Default Credentials work.
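Since the question specifically asks how to avoid key files, another option is the impersonation support in the google-auth library. The sketch below is illustrative only: the service account email, project ID, and topic name are placeholders, and it assumes the account you run it as (abc@gmail.com) holds the Service Account Token Creator role on the service account in project-A.
import google.auth
from google.auth import impersonated_credentials
from google.cloud import pubsub_v1

# Your own credentials, e.g. from `gcloud auth application-default login`.
source_credentials, _ = google.auth.default()

# Short-lived credentials that impersonate the service account in project-A.
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="my-sa@project-a-id.iam.gserviceaccount.com",  # placeholder
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=3600,
)

# Pass the impersonated credentials to the Pub/Sub client; no key file is needed.
publisher = pubsub_v1.PublisherClient(credentials=target_credentials)
topic_path = publisher.topic_path("project-a-id", "myTopic1")  # placeholders
print(publisher.publish(topic_path, b"hello").result(timeout=60))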

Azure Python SDK Credential Refresh

I'm using the Azure Python SDK to programmatically connect to Azure services from Linux.
I can log in successfully via az('login'):
login found
However, when I try to create credentials and get a token, I get an error saying that my grant has expired:
creds = DefaultAzureCredential()
token = creds.get_token('https://database.windows.net/.default')
VisualStudioCodeCredential.get_token failed: Azure Active Directory error '(invalid_grant) AADSTS50173: The provided grant has expired due to it being revoked, a fresh auth token is needed. The user might have changed or reset their password. The grant was issued on '2021-11-04T15:21:38.19517642' and the TokensValidFrom date (before which tokens are not valid) for this user is '2022-01-08T19:52:17.0000000Z'
How do I refresh this? I tried az('account clear') followed by az('login'), but I get the same result. Is there a method specific to DefaultAzureCredential for getting a refreshed token?
The DefaultAzureCredential class tries to acquire a token using multiple methods in a particular order, and a VS Code logged-in user takes precedence over an Azure CLI logged-in user.
When you use az login you are logging in via the Azure CLI, but it seems you are already logged in to VS Code with some other credentials that are no longer valid, which is why you are getting this error.
To fix this issue, you can exclude the VS Code credential from consideration by setting exclude_visual_studio_code_credential to True. Your code would then be something like:
creds = DefaultAzureCredential(exclude_visual_studio_code_credential=True)
That way DefaultAzureCredential will not use your VS Code credentials to acquire the token.
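Putting it together, a minimal sketch of the fix (the scope string is the one from the question):
from azure.identity import DefaultAzureCredential

# Skip the stale VS Code credential so the chain falls through to the Azure CLI login.
creds = DefaultAzureCredential(exclude_visual_studio_code_credential=True)
token = creds.get_token("https://database.windows.net/.default")
print(token.expires_on)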

Uploading a file with Python returns ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)

blob.upload_from_filename(source) gives the error
raise exceptions.from_http_status(response.status_code, message, response=response)
google.api_core.exceptions.Forbidden: 403 POST https://www.googleapis.com/upload/storage/v1/b/bucket1-newsdata-bluetechsoft/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>)
I am following the Google Cloud example written in Python here.
from google.cloud import storage
def upload_blob(bucket, source, des):
    client = storage.Client.from_service_account_json('/path')
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket)
    blob = bucket.blob(des)
    blob.upload_from_filename(source)
I used gsutil to upload files, which works fine.
Listing the bucket names with a Python script also works fine.
I have the necessary permissions and GOOGLE_APPLICATION_CREDENTIALS set.
The whole thing wasn't working because the service account I am using in GCP didn't have the Storage Admin role.
Granting Storage Admin to my service account solved the problem.
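For reference, that role can be granted from the command line as well; the project ID and service account email below are placeholders, and roles/storage.objectAdmin is often enough if you only need to read and write objects:
gcloud projects add-iam-policy-binding my-project --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" --role="roles/storage.admin"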
As other answers have indicated, this is a permissions issue. I have found the following command to be a useful way to create default application credentials for the currently logged-in user.
Assuming you got this error while running the code on some machine, the following steps are sufficient:
SSH to the VM where the code is running or will be running. Make sure you are logged in as a user who has permission to upload to Google Cloud Storage.
Run the following command:
gcloud auth application-default login
The command will print a URL; open it, generate the token, and paste it back into the SSH console.
That's it. Any Python application started as that user will use these credentials by default when interacting with storage buckets.
Happy GCP'ing :)
This question is more appropriate for a support case.
As you are getting a 403, you are most likely missing a permission in IAM; the Google Cloud Platform support team will be able to inspect your resources and configuration.
This is what worked for me when the Google documentation didn't. I was getting the same error even with the appropriate permissions.
import pathlib
import google.cloud.storage as gcs
client = gcs.Client()
#set target file to write to
target = pathlib.Path("local_file.txt")
#set file to download
FULL_FILE_PATH = "gs://bucket_name/folder_name/file_name.txt"
#open filestream with write permissions
with target.open(mode="wb") as downloaded_file:
    #download and write file locally
    client.download_blob_to_file(FULL_FILE_PATH, downloaded_file)

Trying to connect to Google Cloud Storage (GCS) using Python

I've built the following script:
import boto
import sys
import gcs_oauth2_boto_plugin
def check_size_lzo(ds):
    # URI scheme for Cloud Storage.
    CLIENT_ID = 'myclientid'
    CLIENT_SECRET = 'mysecret'
    GOOGLE_STORAGE = 'gs'
    dir_file = 'date_id={ds}/apollo_export_{ds}.lzo'.format(ds=ds)
    gcs_oauth2_boto_plugin.SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)
    uri = boto.storage_uri('my_bucket/data/apollo/prod/' + dir_file, GOOGLE_STORAGE)
    key = uri.get_key()
    if key.size < 45379959:
        raise ValueError('umg lzo file is too small, investigate')
    else:
        print('umg lzo file is %sMB' % round((key.size/1e6), 2))
if __name__ == "__main__":
    check_size_lzo(sys.argv[1])
It works fine locally but when I try and run on kubernetes cluster I get the following error:
boto.exception.GSResponseError: GSResponseError: 403 Access denied to 'gs://my_bucket/data/apollo/prod/date_id=20180628/apollo_export_20180628.lzo'
I have updated the .boto file on my cluster and added my OAuth client ID and secret, but I am still having the same issue.
Would really appreciate help resolving this issue.
Many thanks!
If it works in one environment and fails in another, I assume that you're getting your auth from a .boto file (or possibly from the OAUTH2_CLIENT_ID environment variable), but your kubernetes instance is lacking such a file. That you got a 403 instead of a 401 says that your remote server is correctly authenticating as somebody, but that somebody is not authorized to access the object, so presumably you're making the call as a different user.
Unless you've changed something, I'm guessing that you're getting the default Kubernetes Engine auth, which means a service account associated with your project. That service account probably hasn't been granted read permission for your object, which is why you're getting a 403. Grant it read/write permission for your GCS resources, and that should solve the problem.
Also note that by default the default credentials aren't scoped to include GCS, so you'll need to add that as well and then restart the instance.
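For example, assuming the cluster's nodes run as the default Compute Engine service account (the email below is a placeholder), read access to the bucket can be granted with:
gsutil iam ch serviceAccount:123456789-compute@developer.gserviceaccount.com:objectViewer gs://my_bucket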

Unauthorized error in GAE SDK, but it works once deployed

I am running this code in a small example:
from google.cloud import storage
from google.appengine.api import app_identity
class TestB(base_handler.TRNHandler):
    #...
    def post(self):
        client = storage.Client()
        bucket_name = os.environ.get('BUCKET_NAME',
                                     app_identity.get_default_gcs_bucket_name())
        bucket = client.get_bucket(bucket_name)
        #...
If I deploy this code everything works as expected. But when I run it locally (SDK), I get an error: Unauthorized: 401 Invalid Credentials. What's happening and how can I fix it?
I've got a pretty strong guess, although I can't be sure without seeing your exact logs and whatnot.
The google.cloud library is smart about authorization. It uses a thing called "application default credentials." If you run that code on App Engine or on a GCE instance, the code will be able to figure out which service account is associated with that instance and authorize itself with the credentials of that account.
However, when you run the program locally, the library has no way of knowing which credentials to use, and so it just makes calls anonymously. Your bucket probably hasn't granted anonymous users access (which is good), and so the call fails with a 401.
You can, however, register credentials locally with the gcloud command:
$> gcloud auth application-default login
Run that, and for a while the library will use whatever credentials you logged in with. Alternatively, you can make sure the GOOGLE_APPLICATION_CREDENTIALS environment variable points to a service account's JSON key file.
There's a bunch of documentation on exactly how Application Default Credentials pick a credential.
Alternatively, if you'd prefer to specify the credentials right in the program, you can do that too:
client = storage.Client.from_service_account_json('/path/to/key_file.json')
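For completeness, here is a minimal sketch of the environment-variable approach for local runs; the key file path and bucket name are placeholders:
import os
from google.cloud import storage

# Point the client library at a service account key file before creating the client.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/key_file.json"

client = storage.Client()
bucket = client.get_bucket("your-bucket-name")
print(bucket.name)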
