I am following the example in https://developers.google.com/storage/docs/gspythonlibrary#credentials
I created a client/secret pair in the Developers Console by choosing "Create new Client ID", "Installed application", "Other".
I have the following code in my python script:
import boto
from gcs_oauth2_boto_plugin.oauth2_helper import SetFallbackClientIdAndSecret
CLIENT_ID = 'my_client_id'
CLIENT_SECRET = 'xxxfoo'
SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)
uri = boto.storage_uri('foobartest2014', 'gs')
proj_id = 'my_project_id'
header_values = {"x-goog-project-id": proj_id}
uri.create_bucket(headers=header_values)
and it fails with the following error:
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 555, in create_bucket
conn = self.connect()
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 140, in connect
**connection_args)
File "/usr/local/lib/python2.7/dist-packages/boto/gs/connection.py", line 47, in __init__
suppress_consec_slashes=suppress_consec_slashes)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 190, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 572, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/dist-packages/boto/auth.py", line 883, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 3 handlers were checked. ['OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials
I have been struggling with this for the last couple of days. It turns out that boto and the gspythonlibrary are totally obsolete.
The latest example code showing how to use/authenticate Google Cloud Storage is here:
https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/storage/api
You need to provide a client/secret pair in a .boto file, and then run gsutil config.
It will create a refresh token, and then it should work!
For more info, see https://developers.google.com/storage/docs/gspythonlibrary#credentials
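As a rough sketch (the values are placeholders, and the exact key names come from gsutil/gcs_oauth2_boto_plugin, so double-check against your own file), the relevant .boto entries end up looking something like:
[OAuth2]
client_id = my_client_id
client_secret = xxxfoo

[Credentials]
gs_oauth2_refresh_token = <refresh token written by gsutil config>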
You can also build a console application that handles gsutil authentication and passes gsutil commands (gsutil cp, gsutil rm, gsutil config -a) through to the Cloud SDK for execution.
I am trying to access a secret in GCP Secret Manager and I get the following error:
in get_total_results
"api_key": get_credentials("somekey").get("somekey within key"),
File "/helper.py", line 153, in get_credentials
response = client.access_secret_version(request={"name": resource_name})
File "/usr/local/lib/python3.8/site-packages/google/cloud/secretmanager_v1/services/secret_manager_service/client.py", line 1136, in access_secret_version
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/usr/local/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py", line 285, in retry_wrapped_func
return retry_target(
File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py", line 188, in retry_target
return target()
File "/usr/local/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 69, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 Request had insufficient authentication scopes.
The code is fairly simple:
import json
import os

from google.cloud import secretmanager

def get_credentials(secret_id):
    project_id = os.environ.get("PROJECT_ID")
    resource_name = f"projects/{project_id}/secrets/{secret_id}/versions/1"
    client = secretmanager.SecretManagerServiceClient()
    response = client.access_secret_version(request={"name": resource_name})
    secret_string = response.payload.data.decode("UTF-8")
    secret_dict = json.loads(secret_string)
    return secret_dict
So, what I have is a Cloud Function, deployed using triggers, that uses a service account with the Owner role.
The Cloud Function triggers a Kubernetes Job, which creates a container that downloads a repo inside the container and executes it.
Dockerfile is:
FROM gcr.io/project/repo:latest
FROM python:3.8-slim-buster
COPY . /some_dir
WORKDIR /some_dir
COPY --from=0 ./repo /a_repo
RUN pip install -r requirements.txt && pip install -r a_repo/requirements.txt
ENTRYPOINT ["python3" , "main.py"]
The GCE instance might not have the correct authentication scope.
From: https://developers.google.com/identity/protocols/oauth2/scopes#secretmanager
https://www.googleapis.com/auth/cloud-platform is the required scope.
When creating the GCE instance, you need to select the option that gives the instance the correct scope to call out to Cloud APIs.
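As a quick sanity check, you can ask the metadata server which scopes the instance (or node) actually has. A minimal sketch using only the standard library; it assumes the code runs on a GCE/GKE machine where the metadata server is reachable:
import urllib.request

# Ask the metadata server for the OAuth scopes granted to the default service account.
url = ("http://metadata.google.internal/computeMetadata/v1/"
       "instance/service-accounts/default/scopes")
req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
scopes = urllib.request.urlopen(req).read().decode("utf-8").splitlines()
print(scopes)  # should include https://www.googleapis.com/auth/cloud-platform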
I am running a Dataflow streaming job using the 2.11.0 release.
I get the following authentication error after a few hours:
File "streaming_twitter.py", line 188, in <lambda>
File "streaming_twitter.py", line 102, in estimate
File "streaming_twitter.py", line 84, in estimate_aiplatform
File "streaming_twitter.py", line 42, in get_service
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper return wrapped(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery.py", line 227, in build credentials=credentials)
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/_helpers.py", line 130, in positional_wrapper return wrapped(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/discovery.py", line 363, in build_from_document credentials = _auth.default_credentials()
File "/usr/local/lib/python2.7/dist-packages/googleapiclient/_auth.py", line 42, in default_credentials credentials, _ = google.auth.default()
File "/usr/local/lib/python2.7/dist-packages/google/auth/_default.py", line 306, in default raise exceptions.DefaultCredentialsError(_HELP_MESSAGE) DefaultCredentialsError: Could not automatically determine credentials. Please set GOOGLE_APPLICATION_CREDENTIALS or explicitly create credentials and re-run the application.
This Dataflow job performs an API request to AI Platform Prediction,
and it seems the authentication token is expiring.
Code snippet:
from googleapiclient import discovery

def get_service():
    # If it hasn't been instantiated yet: do it now
    # (DISCOVERY_SERVICE is defined elsewhere in the script)
    return discovery.build('ml', 'v1',
                           discoveryServiceUrl=DISCOVERY_SERVICE,
                           cache_discovery=True)
I tried adding the following lines to the service function:
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/tmp/key.json"
But I get:
DefaultCredentialsError: File "/tmp/key.json" was not found. [while running 'generatedPtransform-930']
I assume this is because the file is not on the Dataflow workers.
Another option is to use the developerKey parameter of the build method, but it doesn't seem to be supported by AI Platform Prediction; I get the error:
Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project."> [while running 'generatedPtransform-22624']
I'm looking to understand how to fix this and what the best practice is.
Any suggestions?
Complete logs here
Complete code here
Setting os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/tmp/key.json' only works locally with the DirectRunner. Once deploying to a distributed runner like Dataflow, each worker won't be able to find the local file /tmp/key.json.
If you want each worker to use a specific service account, you can tell Beam which service account to use to identify workers.
First, grant the roles/dataflow.worker role to the service account you want your workers to use. There is no need to download the service account key file :)
Then, if you're letting PipelineOptions parse your command line arguments, you can simply use the service_account_email option and specify it like --service_account_email your-email@your-project.iam.gserviceaccount.com when running your pipeline.
The service account pointed to by your GOOGLE_APPLICATION_CREDENTIALS is simply used to start the job, but each worker uses the service account specified by service_account_email. If a service_account_email is not passed, it defaults to the email from your GOOGLE_APPLICATION_CREDENTIALS file.
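If you prefer to set it in code rather than on the command line, here is a minimal sketch; the project, region, bucket and service account email below are placeholders, not values from your setup:
from apache_beam.options.pipeline_options import PipelineOptions

# All values are placeholders; adjust to your project, bucket and worker service account.
options = PipelineOptions(
    runner='DataflowRunner',
    project='your-project',
    region='us-central1',
    temp_location='gs://your-bucket/tmp',
    streaming=True,
    service_account_email='dataflow-worker@your-project.iam.gserviceaccount.com',
)
# pipeline = beam.Pipeline(options=options)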
My goal is to be able to run a Python program using boto3 to access DynamoDB without any local configuration. I've been following this AWS document (https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html) and it seems to be feasible using the 'IAM role' option (https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#iam-role). This means I don't have anything configured locally.
However, after I attached a role with DynamoDB access permission to the EC2 instance the Python program runs on and called boto3.resource('dynamodb'), I kept getting the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/.local/lib/python3.6/site-packages/boto3/__init__.py", line 100, in resource
return _get_default_session().resource(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.6/site-packages/boto3/session.py", line 389, in resource
aws_session_token=aws_session_token, config=config)
File "/home/ubuntu/.local/lib/python3.6/site-packages/boto3/session.py", line 263, in client
aws_session_token=aws_session_token, config=config)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/session.py", line 839, in create_client
client_config=config, api_version=api_version)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/client.py", line 86, in create_client
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/client.py", line 328, in _get_client_args
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/args.py", line 47, in get_client_args
endpoint_url, is_secure, scoped_config)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/args.py", line 117, in compute_client_args
service_name, region_name, endpoint_url, is_secure)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/client.py", line 402, in resolve
service_name, region_name)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/regions.py", line 122, in construct_endpoint
partition, service_name, region_name)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/regions.py", line 135, in _endpoint_for_partition
raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.
I've searched the internet and it seems most of the solutions point to having local configuration (e.g. ~/.aws/config, a boto3 config file, etc.).
Also, I have verified that from EC2 instance, I am able to get the region from instance metadata:
$ curl --silent http://169.254.169.254/latest/dynamic/instance-identity/document
{
...
"region" : "us-east-2",
...
}
My workaround right now is to provide the environment variable AWS_DEFAULT_REGION, passed via the Docker command line.
Here is the simple code I have to replicate the issue:
>>> import boto3
>>> dynamodb = boto3.resource('dynamodb')
I expected boto3 to somehow be able to pick up the region that is already available in the EC2 instance.
There are two types of configuration data in boto3: credentials and non-credentials (including region). How boto3 reads them differs.
See:
Configuring Credentials
https://github.com/boto/boto3/issues/375
Specifically, boto3 retrieves credentials from the instance metadata service but not other configuration items (such as region).
So, you need to indicate which region you want. You can retrieve the current region from metadata and use it, if appropriate. Or use the environment variable AWS_DEFAULT_REGION.
You can pass region as a parameter to any boto3 resource.
dynamodb = boto3.resource('dynamodb', region_name='us-east-2')
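If you'd rather not hard-code the region, a minimal sketch that reads it from the instance-identity document shown above (this assumes the code runs on EC2, where that metadata endpoint is reachable):
import json
import urllib.request

import boto3

# Read the region from the EC2 instance-identity document (same endpoint as the curl example above).
doc_url = "http://169.254.169.254/latest/dynamic/instance-identity/document"
with urllib.request.urlopen(doc_url, timeout=2) as resp:
    region = json.loads(resp.read().decode("utf-8"))["region"]

dynamodb = boto3.resource('dynamodb', region_name=region)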
I have this simple script that spits out the buckets in my GCS:
import boto
import gcs_oauth2_boto_plugin
import os
import shutil
import StringIO
import tempfile
import time
# URI scheme for Cloud Storage.
GOOGLE_STORAGE = 'gs'
# URI scheme for accessing local files.
LOCAL_FILE = 'file'
header_values = {"x-goog-project-id": "xxxxxxxxxxxx"}
uri = boto.storage_uri('', GOOGLE_STORAGE)
for bucket in uri.get_all_buckets(headers=header_values):
    print bucket.name
The top of my ~/.boto file has the following (with real values for everything inside brackets):
# Google OAuth2 service account credentials (for "gs://" URIs):
gs_service_key_file = /home/pi/dev/camera/cl-camera-<id>.json
gs_service_client_id = '<user>@<id>.iam.gserviceaccount.com'
Everything works fine when running without sudo, but once I add sudo (I need access to GPIO pins since this is on an RPi), I get the following error:
Traceback (most recent call last):
File "gcs-test.py", line 24, in <module>
for bucket in uri.get_all_buckets(headers=header_values):
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 584, in get_all_buckets
conn = self.connect()
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 140, in connect
**connection_args)
File "/usr/local/lib/python2.7/dist-packages/boto/gs/connection.py", line 47, in __init__
suppress_consec_slashes=suppress_consec_slashes)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 191, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/dist-packages/boto/auth.py", line 1021, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 3 handlers were checked. ['OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials
Any ideas as to what's happening and why it only happens when I run it with sudo?
I figured this one out. Since I'm running it as root now, it looks for the .boto file in a different place (/root/.boto instead of /home/pi/.boto), so I did the following to create a new config file, and it worked:
$ sudo su
$ gsutil config -e
I have a Flask application where I can run a script (with the help of Flask-Script) that makes use of the Google API discovery service using the code below:
app_script.py
import argparse
import csv
import httplib2
from apiclient import discovery
from oauth2client import client
from oauth2client.file import Storage
from oauth2client import tools
def get_auth_credentials():
    flow = client.flow_from_clientsecrets(
        '/path/to/client_secrets.json',  # file downloaded from Google Developers Console
        scope='https://www.googleapis.com/auth/webmasters.readonly',
        redirect_uri='urn:ietf:wg:oauth:2.0:oob')
    storage = Storage('/path/to/storage_file.dat')
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        parser = argparse.ArgumentParser(parents=[tools.argparser])
        flags = parser.parse_args(['--noauth_local_webserver'])
        credentials = tools.run_flow(flow=flow, storage=storage, flags=flags)
    return credentials

def main():
    credentials = get_auth_credentials()
    http_auth = credentials.authorize(httplib2.Http())
    # build the service object
    service = discovery.build('webmasters', 'v3', http_auth)
Now the problem is that every time I shut down my computer, then boot and run the script again, I get the following error when trying to build the service object:
terminal:
$ python app.py runscript
No handlers could be found for logger "oauth2client.util"
Traceback (most recent call last):
File "app.py", line 5, in <module>
testapp.manager.run()
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/home/user/development/testproject/testapp/__init__.py", line 16, in runscript
metrics_collector.main()
File "/home/user/development/testproject/testapp/metrics_collector.py", line 177, in main
service = discovery.build('webmasters', 'v3', http_auth)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 206, in build
credentials=credentials)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 306, in build_from_document
base = urljoin(service['rootUrl'], service['servicePath'])
KeyError: 'rootUrl'
Installed:
google-api-python-client==1.4.2
httplib2==0.9.2
Flask==0.10.1
Flask-Script==2.0.5
The script runs sometimes*, but that's the problem: I don't know why it runs sometimes and fails at other times.
*What I tried to make it work: delete all the cookies, download the client_secrets.json from the Google Developers Console again, remove the storage_file.dat, and remove all .pyc files from the project.
Can anyone help me see what's going on?
From a little bit of research here, it seems that the No handlers could be found for logger "oauth2client.util" error can actually be masking a different error. You need to use the logging module and configure it so the underlying error is output.
Solution
Just add the following to configure logging:
import logging
logging.basicConfig()
Other helpful/related posts
Python - No handlers could be found for logger "OpenGL.error"
SOLVED: Error trying to access "google drive" with python (google quickstart.py source code)
Thank you so much for the tip, Avantol13; you were right, there was an error being masked.
The problem was that the following line:
service = discovery.build('webmasters', 'v3', http_auth)
should actually have been:
service = discovery.build('webmasters', 'v3', http=http_auth)
All working now. Thanks