I have this simple script that spits out the buckets in my GCS:
import boto
import gcs_oauth2_boto_plugin
import os
import shutil
import StringIO
import tempfile
import time
# URI scheme for Cloud Storage.
GOOGLE_STORAGE = 'gs'
# URI scheme for accessing local files.
LOCAL_FILE = 'file'
header_values = {"x-goog-project-id": "xxxxxxxxxxxx"}
uri = boto.storage_uri('', GOOGLE_STORAGE)
for bucket in uri.get_all_buckets(headers=header_values):
    print bucket.name
The top of my ~/.boto file has the following (with real values for everything inside brackets):
# Google OAuth2 service account credentials (for "gs://" URIs):
gs_service_key_file = /home/pi/dev/camera/cl-camera-<id>.json
gs_service_client_id = <user>@<id>.iam.gserviceaccount.com
Everything works fine when running without sudo, but once I add sudo (I need access to the GPIO pins, since this is on a Raspberry Pi), I get the following error:
Traceback (most recent call last):
File "gcs-test.py", line 24, in <module>
for bucket in uri.get_all_buckets(headers=header_values):
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 584, in get_all_buckets
conn = self.connect()
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 140, in connect
**connection_args)
File "/usr/local/lib/python2.7/dist-packages/boto/gs/connection.py", line 47, in __init__
suppress_consec_slashes=suppress_consec_slashes)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 191, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/dist-packages/boto/auth.py", line 1021, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 3 handlers were checked. ['OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials
Any ideas as to what's happening and why it's only when I run it with sudo?
I figured this one out. Since I'm running it as root now, it looks for the .boto file in a different place (/root/.boto instead of /home/pi/.boto), so I did the following to create a new config file and it worked:
$ sudo su
$ gsutil config -e
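An alternative that avoids keeping a second copy of the config is to point boto at the existing file via the BOTO_CONFIG environment variable (a sketch; depending on your sudo setup you may need to pass the variable explicitly like this):
$ sudo BOTO_CONFIG=/home/pi/.boto python gcs-test.py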
My goal is to be able to run a Python program that uses boto3 to access DynamoDB without any local configuration. I've been following this AWS document (https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html) and it seems to be feasible using the 'IAM role' option (https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html#iam-role). This means I don't have anything configured locally.
However, after I attached a role with DynamoDB access permission to the EC2 instance the Python program runs on and called boto3.resource('dynamodb'), I kept getting the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/.local/lib/python3.6/site-packages/boto3/__init__.py", line 100, in resource
return _get_default_session().resource(*args, **kwargs)
File "/home/ubuntu/.local/lib/python3.6/site-packages/boto3/session.py", line 389, in resource
aws_session_token=aws_session_token, config=config)
File "/home/ubuntu/.local/lib/python3.6/site-packages/boto3/session.py", line 263, in client
aws_session_token=aws_session_token, config=config)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/session.py", line 839, in create_client
client_config=config, api_version=api_version)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/client.py", line 86, in create_client
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/client.py", line 328, in _get_client_args
verify, credentials, scoped_config, client_config, endpoint_bridge)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/args.py", line 47, in get_client_args
endpoint_url, is_secure, scoped_config)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/args.py", line 117, in compute_client_args
service_name, region_name, endpoint_url, is_secure)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/client.py", line 402, in resolve
service_name, region_name)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/regions.py", line 122, in construct_endpoint
partition, service_name, region_name)
File "/home/ubuntu/.local/lib/python3.6/site-packages/botocore/regions.py", line 135, in _endpoint_for_partition
raise NoRegionError()
botocore.exceptions.NoRegionError: You must specify a region.
I've searched the internet and it seems most of the solutions point to having local configuration (e.g. ~/.aws/config, a boto3 config file, etc.).
Also, I have verified that from the EC2 instance I am able to get the region from the instance metadata:
$ curl --silent http://169.254.169.254/latest/dynamic/instance-identity/document
{
  ...
  "region" : "us-east-2",
  ...
}
My workaround right now is to provide the environment variable AWS_DEFAULT_REGION, passed in via the Docker command line.
Here is the simple code I have to replicate the issue:
>>> import boto3
>>> dynamodb = boto3.resource('dynamodb')
I expected boto3 to be able to somehow pick up the region that is already available on the EC2 instance.
There are two types of configuration data in boto3: credentials and non-credentials (including region). How boto3 reads them differs.
See:
Configuring Credentials
https://github.com/boto/boto3/issues/375
Specifically, boto3 retrieves credentials from the instance metadata service but not other configuration items (such as region).
So, you need to indicate which region you want. You can retrieve the current region from metadata and use it, if appropriate. Or use the environment variable AWS_DEFAULT_REGION.
You can pass the region as a parameter to any boto3 resource:
dynamodb = boto3.resource('dynamodb', region_name='us-east-2')
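If you don't want to hard-code it, here is a minimal sketch (assuming Python 3 and that the code runs on the EC2 instance) that reads the region from the instance-identity document you already queried with curl, and passes it explicitly:
import json
import urllib.request
import boto3
# Read the region from the EC2 instance-identity document
url = 'http://169.254.169.254/latest/dynamic/instance-identity/document'
doc = json.loads(urllib.request.urlopen(url).read())
dynamodb = boto3.resource('dynamodb', region_name=doc['region'])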
I'm trying to connect to host and run command with module Fabric 2 and have this error:
Traceback (most recent call last):
File "Utilities/fabfile.py", line 4, in <module>
res.run('uname -s')
File "<decorator-gen-3>", line 2, in run
File "/usr/local/lib/python2.7/dist-packages/fabric/connection.py", line 29, in opens
self.open()
File "/usr/local/lib/python2.7/dist-packages/fabric/connection.py", line 501, in open
self.client.connect(**kwargs)
File "/home/trishnevskaya/.local/lib/python2.7/site-packages/paramiko/client.py", line 424, in connect
passphrase,
File "/home/username/.local/lib/python2.7/site-packages/paramiko/client.py", line 715, in _auth
raise SSHException('No authentication methods available')
paramiko.ssh_exception.SSHException: No authentication methods available
Simple code from docs (http://docs.fabfile.org/en/latest/getting-started.html):
from fabric import Connection
res = Connection('<host-ip>')
res.run('uname -s')
According to the docs, I shouldn't need any special configuration, but it doesn't work...
fabric 2.1.3
python 2.7.14
The following works for me.
connect_kwargs = {"key_filename": ['PATH/KEY.pem']}
with Connection(host="EC2", user="ubuntu", connect_kwargs=connect_kwargs) as c:
    c.run("mkdir abds")
I ran into the same issue. Rather than passing an SSH key file, as suggested previously, another trivial way to sort it out is to pass a password (which is fine for the test/development stage).
import getpass
from fabric import Connection, Config
sudo_pass = getpass.getpass("What's your user password?\n")
config = Config(overrides={'user': '<host-user>', 'connect_kwargs': {'password': sudo_pass}})
c = Connection('<host-ip>', config=config)
c.run('uname -s')
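If you need non-interactive runs, a sketch of the same Config pattern with a key file instead of a password (the host, user, and key path are placeholders):
from fabric import Connection, Config
# Point paramiko at a private key instead of prompting for a password
config = Config(overrides={'user': '<host-user>', 'connect_kwargs': {'key_filename': ['/path/to/key.pem']}})
c = Connection('<host-ip>', config=config)
c.run('uname -s')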
I have a flask application where I can run a script (with the help of Flask-script) that makes use of google api discovery using the code below:
app_script.py
import argparse
import csv
import httplib2
from apiclient import discovery
from oauth2client import client
from oauth2client.file import Storage
from oauth2client import tools
def get_auth_credentials():
    flow = client.flow_from_clientsecrets(
        '/path/to/client_screts.json',  # file downloaded from Google Developers Console
        scope='https://www.googleapis.com/auth/webmasters.readonly',
        redirect_uri='urn:ietf:wg:oauth:2.0:oob')
    storage = Storage('/path/to/storage_file.dat')
    credentials = storage.get()
    if credentials is None or credentials.invalid:
        parser = argparse.ArgumentParser(parents=[tools.argparser])
        flags = parser.parse_args(['--noauth_local_webserver'])
        credentials = tools.run_flow(flow=flow, storage=storage, flags=flags)
    return credentials

def main():
    credentials = get_auth_credentials()
    http_auth = credentials.authorize(httplib2.Http())
    # build the service object
    service = discovery.build('webmasters', 'v3', http_auth)
Now the problem is that every time I shut down my computer, upon booting and running the script again I get the following error when trying to build the service object:
terminal:
$ python app.py runscript
No handlers could be found for logger "oauth2client.util"
Traceback (most recent call last):
File "app.py", line 5, in <module>
testapp.manager.run()
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/__init__.py", line 412, in run
result = self.handle(sys.argv[0], sys.argv[1:])
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/__init__.py", line 383, in handle
res = handle(*args, **config)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/flask_script/commands.py", line 216, in __call__
return self.run(*args, **kwargs)
File "/home/user/development/testproject/testapp/__init__.py", line 16, in runscript
metrics_collector.main()
File "/home/user/development/testproject/testapp/metrics_collector.py", line 177, in main
service = discovery.build('webmasters', 'v3', http_auth)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 206, in build
credentials=credentials)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/home/user/.virtualenvs/testproject/local/lib/python2.7/site-packages/googleapiclient/discovery.py", line 306, in build_from_document
base = urljoin(service['rootUrl'], service['servicePath'])
KeyError: 'rootUrl'
installed:
google-api-python-client==1.4.2
httplib2==0.9.2
Flask==0.10.1
Flask-Script==2.0.5
The script runs sometimes*, but that's the problem: I don't know why it runs sometimes and other times doesn't.
*What I tried to make it work: deleting all the cookies, downloading the client_secrets.json from the Google Developers Console again, removing the storage_file.dat, and removing all .pyc files from the project.
Can anyone help me see what's going on?
From a little bit of research here, it seems that the No handlers could be found for logger "oauth2client.util" error can actually be masking a different error. You need to use the logging module and configure it so the underlying message is actually output.
Solution
Just add the following to configure logging:
import logging
logging.basicConfig()
Other helpful/related posts
Python - No handlers could be found for logger "OpenGL.error"
SOLVED: Error trying to access "google drive" with python (google quickstart.py source code)
Thank you so much for the tip Avantol13, you were right: there was an error being masked.
The problem was that the following line:
service = discovery.build('webmasters', 'v3', http_auth)
should actually have been:
service = discovery.build('webmasters', 'v3', http=http_auth)
All working now. Thanks
I am following the example in https://developers.google.com/storage/docs/gspythonlibrary#credentials
I created a client/secret pair by choosing "create new client id", "installed application", "other" in the Developers Console.
I have the following code in my python script:
import boto
from gcs_oauth2_boto_plugin.oauth2_helper import SetFallbackClientIdAndSecret
CLIENT_ID = 'my_client_id'
CLIENT_SECRET = 'xxxfoo'
SetFallbackClientIdAndSecret(CLIENT_ID, CLIENT_SECRET)
proj_id = 'my_project_id'  # placeholder so the snippet is self-contained
uri = boto.storage_uri('foobartest2014', 'gs')
header_values = {"x-goog-project-id": proj_id}
uri.create_bucket(headers=header_values)
and it fails with the following error:
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 555, in create_bucket
conn = self.connect()
File "/usr/local/lib/python2.7/dist-packages/boto/storage_uri.py", line 140, in connect
**connection_args)
File "/usr/local/lib/python2.7/dist-packages/boto/gs/connection.py", line 47, in __init__
suppress_consec_slashes=suppress_consec_slashes)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 190, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 572, in __init__
host, config, self.provider, self._required_auth_capability())
File "/usr/local/lib/python2.7/dist-packages/boto/auth.py", line 883, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 3 handlers were checked. ['OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials
I have been struggling with this for the last couple of days; it turns out the boto stuff and that gspythonlibrary are totally obsolete.
The latest example code showing how to use/authenticate Google Cloud Storage is here:
https://github.com/GoogleCloudPlatform/python-docs-samples/tree/master/storage/api
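To give a flavor of what the linked samples do, here is a minimal sketch using the JSON API client with application default credentials (assumes google-api-python-client and oauth2client are installed and default credentials are configured; 'my_project_id' is a placeholder):
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
# Build a Cloud Storage JSON API client with application default credentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('storage', 'v1', credentials=credentials)
buckets = service.buckets().list(project='my_project_id').execute()
for bucket in buckets.get('items', []):
    print(bucket['name'])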
You need to provide a client/secret pair in a .boto file, and then run gsutil config.
It will create a refresh token, and then should work!
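For reference, the relevant .boto entries written by gsutil config look roughly like this (a sketch; all values are placeholders):
[Credentials]
gs_oauth2_refresh_token = <refresh token written by gsutil config>
[OAuth2]
client_id = <your client id>
client_secret = <your client secret>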
For more info, see https://developers.google.com/storage/docs/gspythonlibrary#credentials
You can also build a console application that handles gsutil authentication and passes gsutil commands (gsutil cp, rm, gsutil config -a) through to the Cloud SDK for execution.
I'm currently hosting a SimpleHTTPServer locally and have cd'd into my server folder. I'm trying to use afplay to play audio that is located in the server folder. This is my code:
import subprocess
import os
import urllib
import urllib2

audio_file = 'Daybreak.mp3'  # the file served from the current folder
urllib.urlretrieve('http://0.0.0.0:8000', audio_file)
return_code = subprocess.call(["afplay", audio_file])
The file is located in the folder the server is currently hosting from and I am getting GET requests in Terminal. This is the error I get:
File "<stdin>", line 1, in <module>
File "<string>", line 11, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py", line 94, in urlretrieve
return _urlopener.retrieve(url, filename, reporthook, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib.py", line 244, in retrieve
tfp = open(filename, 'wb')
IOError: [Errno 13] Permission denied: 'Daybreak.mp3'
I'm not sure why permission is denied as I have got read and write access to the folder.
You might have read and write access to the folder, but you are not specifying your access credentials in the Python script, so Python has no way of knowing that you are authorized.
You may want to use an authorization method such as HTTPPasswordMgrWithDefaultRealm().
import urllib2

serverUrl = "..."
ID = "..."
accessToken = "..."

p = urllib2.HTTPPasswordMgrWithDefaultRealm()
p.add_password(None, serverUrl, ID, accessToken)
handler = urllib2.HTTPBasicAuthHandler(p)
opener = urllib2.build_opener(handler)
urllib2.install_opener(opener)
# urllib2 has no urlretrieve(); fetch through the installed opener and save manually
response = urllib2.urlopen(serverUrl)
with open('Daybreak.mp3', 'wb') as f:
    f.write(response.read())