google-api-python-client broken because of OAuth2?

I am trying to check whether a certain dataset exists in BigQuery using the Google API Client in Python. It always worked until the last update, when I started getting this strange error that I don't know how to fix:
Traceback (most recent call last):
File "/root/miniconda/lib/python2.7/site-packages/dsUtils/bq_utils.py", line 106, in _get
resp = bq_service.datasets().get(projectId=self.project_id, datasetId=self.id).execute(num_retries=2)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/util.py", line 140, in positional_wrapper
return wrapped(*args, **kwargs)
File "/root/miniconda/lib/python2.7/site-packages/googleapiclient/http.py", line 755, in execute
method=str(self.method), body=self.body, headers=self.headers)
File "/root/miniconda/lib/python2.7/site-packages/googleapiclient/http.py", line 93, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 598, in new_request
self._refresh(request_orig)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 864, in _refresh
self._do_refresh_request(http_request)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 891, in _do_refresh_request
body = self._generate_refresh_request_body()
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/client.py", line 1597, in _generate_refresh_req
uest_body
assertion = self._generate_assertion()
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/service_account.py", line 263, in _generate_ass
ertion
key_id=self._private_key_id)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/crypt.py", line 97, in make_signed_jwt
signature = signer.sign(signing_input)
File "/root/miniconda/lib/python2.7/site-packages/oauth2client/_pycrypto_crypt.py", line 101, in sign
return PKCS1_v1_5.new(self._key).sign(SHA256.new(message))
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Signature/PKCS1_v1_5.py", line 112, in sign
m = self._key.decrypt(em)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 174, in decrypt
return pubkey.pubkey.decrypt(self, ciphertext)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/PublicKey/pubkey.py", line 93, in decrypt
plaintext=self._decrypt(ciphertext)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/PublicKey/RSA.py", line 235, in _decrypt
r = getRandomRange(1, self.key.n-1, randfunc=self._randfunc)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Util/number.py", line 123, in getRandomRange
value = getRandomInteger(bits, randfunc)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Util/number.py", line 104, in getRandomInteger
S = randfunc(N>>3)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 202, in read
return self._singleton.read(bytes)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 178, in read
return _UserFriendlyRNG.read(self, bytes)
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 137, in read
self._check_pid()
File "/root/miniconda/lib/python2.7/site-packages/Crypto/Random/_UserFriendlyRNG.py", line 153, in _check_pid
raise AssertionError("PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()")
AssertionError: PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()
Does anyone understand what is happening?
Note that I also get this error with other components, like GCStorage.
Note also that I use the following code to load my Google credentials:
from oauth2client.client import GoogleCredentials
def get_credentials(credentials_path):  # path to my JSON credentials file
    logger.info('Getting credentials...')
    try:
        os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credentials_path
        credentials = GoogleCredentials.get_application_default()
        return credentials
    except Exception as e:
        raise e
So if anyone knows a better way to load my Google credentials from my JSON service account file, one which would avoid this error, please tell me.

It looks like the error is in the PyCrypto module, which appears to be used under the hood by Google's OAuth2 implementation. If your code is calling os.fork() at some point, you may need to call Crypto.Random.atfork() afterward in both the parent and child process in order to update the module's internal state.
See here for PyCrypto docs; search for "atfork" for more info:
https://github.com/dlitz/pycrypto
This question and answer might also be relevant:
PyCrypto : AssertionError("PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()")
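A minimal sketch of that fix, assuming the fork is an explicit os.fork() in your own code (run_child_work is a hypothetical placeholder, not something from the question):
import os
from Crypto import Random

pid = os.fork()
# The child inherits the parent's RNG state, so re-seed on both sides of the
# fork; otherwise PyCrypto's PID check raises the AssertionError above the
# next time the service-account key is used to sign a JWT.
Random.atfork()

if pid == 0:
    run_child_work()  # hypothetical: BigQuery/GCS calls are safe again here
    os._exit(0)
else:
    os.waitpid(pid, 0)
If the fork happens inside a library (multiprocessing, a preforking web server, etc.), put the Random.atfork() call in that library's per-worker initialization hook instead.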

Related

Can't read from a .env file

I'm new to Python, and I am trying to build a little Reddit bot to get some practice.
My bot works just fine (tested many times), but once I try to move the credentials to a .env file it fails.
I've run pip3 install python-dotenv.
This is my code, just one file, removing irrelevant stuff:
import os
import praw
import firebase_admin
from firebase_admin import credentials
from firebase_admin import db
from dotenv import load_dotenv

load_dotenv()

cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred, {
    'databaseURL': 'https://makideat-6a-default-db.firebaseio.com'
})
db_ref = db.reference()

reddit = praw.Reddit(
    client_id=os.getenv('CLIENT_ID'),
    client_secret=os.getenv('CLIENT_SECRET'),
    username=os.getenv('USERNAME'),
    password=os.getenv('PASSWORD'),
    user_agent=os.getenv('USER_AGENT')
)
def summoned_reply(comment):
    # DO SOME STUFF...

messages = reddit.inbox.stream()  # creates an iterable for your inbox and streams it
for message in messages:  # iterates through your messages
    try:
        if message in reddit.inbox.mentions() and message in reddit.inbox.unread():
            summoned_reply(message)
            message.mark_read()
    except praw.exceptions.APIException:
        print("probably a rate limit....")
And this is the error I'm getting (line 56 is for message in messages:):
Traceback (most recent call last):
File "C:\Users\Yanay\Documents\Coding\NiceAndPretty\main.py", line 56, in <module>
for message in messages: # iterates through your messages
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\models\util.py", line 195, in stream_generator
for item in reversed(list(function(limit=limit, **function_kwargs))):
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\models\listing\generator.py", line 63, in __next__
self._next_batch()
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\models\listing\generator.py", line 89, in _next_batch
self._listing = self._reddit.get(self.url, params=self.params)
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\util\deprecate_args.py", line 43, in wrapped
return func(**dict(zip(_old_args, args)), **kwargs)
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\reddit.py", line 634, in get
return self._objectify_request(method="GET", params=params, path=path)
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\reddit.py", line 739, in _objectify_request
self.request(
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\util\deprecate_args.py", line 43, in wrapped
return func(**dict(zip(_old_args, args)), **kwargs)
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\praw\reddit.py", line 941, in request
return self._core.request(
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\prawcore\sessions.py", line 330, in request
return self._request_with_retries(
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\prawcore\sessions.py", line 228, in _request_with_retries
response, saved_exception = self._make_request(
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\prawcore\sessions.py", line 185, in _make_request
response = self._rate_limiter.call(
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\prawcore\rate_limit.py", line 33, in call
kwargs["headers"] = set_header_callback()
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\prawcore\sessions.py", line 283, in _set_header_callback
self._authorizer.refresh()
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\prawcore\auth.py", line 425, in refresh
self._request_token(
File "C:\Users\Yanay\.virtualenvs\NiceAndPretty\lib\site-packages\prawcore\auth.py", line 158, in _request_token
raise OAuthException(
prawcore.exceptions.OAuthException: invalid_grant error processing request
Then, in the same directory I have a .env file in the following structure (changed characters for everything here of course, but haven't changed the structure or added or removed spaces from anywhere):
CLIENT_ID=gGATUW-Ea507LoQ
CLIENT_SECRET=_b6vCCHKBCqVHzlJin3sEQ
USERNAME=Maddeeat
PASSWORD=!Neatat!
USER_AGENT=Maddeeat:1.0.0 (by u/Maddeeat)
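No answer is recorded here, but one thing worth checking (my assumption, not something confirmed in this thread): load_dotenv() does not override variables that are already set in the environment, and on Windows USERNAME is a built-in variable holding the OS account name, so os.getenv('USERNAME') may never see the value from the .env file. A quick sketch to verify what actually gets loaded:
import os
from dotenv import load_dotenv

load_dotenv()  # by default this does NOT override pre-existing environment variables

# Print what the process actually sees; on Windows, USERNAME will likely be
# the OS account name rather than the value from the .env file.
for key in ('CLIENT_ID', 'CLIENT_SECRET', 'USERNAME', 'PASSWORD', 'USER_AGENT'):
    print(key, '->', os.getenv(key))
If that turns out to be the cause, either rename the variable to something non-reserved such as REDDIT_USERNAME, or call load_dotenv(override=True).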

Why am I getting pymongo.errors.AutoReconnect: connection pool paused

I'm getting this AutoReconnect error, and there are around 100 connections in the logs during this call to .objects. Here is the document:
class NotificationDoc(Document):
    patient_id = StringField(max_length=32)
    type = StringField(max_length=32)
    sms_sent = BooleanField(default=False)
    email_sent = BooleanField(default=False)
    sending_time = DateTimeField(default=datetime.utcnow)

    def __unicode__(self):
        return "{}_{}_{}".format(self.patient_id, self.type, self.sending_time.strftime('%Y-%m-%d'))
and this is the query call that causes the issue:
docs = NotificationDoc.objects(sending_time__lte=from_date)
This is the complete stacktrace. How can I understand what is going on?
pymongo.errors.AutoReconnect: connection pool paused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/app/src/reminder/configurations/app/worker.py", line 106, in schedule_recall
schedule_recall_use_case.execute()
File "/app/src/reminder/domain/notification/recall/use_cases/schedule_recall.py", line 24, in execute
notifications_sent = self.notifications_provider.find_recalled_notifications_for_date(no_recalls_before_date)
File "/app/src/reminder/data_providers/database/odm/repositories.py", line 42, in find_recalled_notifications_for_date
if NotificationDoc.objects(sending_time__lte=from_date).count() > 0:
File "/usr/local/lib/python3.8/site-packages/mongoengine/queryset/queryset.py", line 144, in count
return super().count(with_limit_and_skip)
File "/usr/local/lib/python3.8/site-packages/mongoengine/queryset/base.py", line 423, in count
count = count_documents(
File "/usr/local/lib/python3.8/site-packages/mongoengine/pymongo_support.py", line 38, in count_documents
return collection.count_documents(filter=filter, **kwargs)
File "/usr/local/lib/python3.8/site-packages/pymongo/collection.py", line 1502, in count_documents
return self.__database.client._retryable_read(
File "/usr/local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1307, in _retryable_read
with self._secondaryok_for_server(read_pref, server, session) as (
File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1162, in _secondaryok_for_server
with self._get_socket(server, session) as sock_info:
File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.8/site-packages/pymongo/mongo_client.py", line 1099, in _get_socket
with server.get_socket(
File "/usr/local/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/usr/local/lib/python3.8/site-packages/pymongo/pool.py", line 1371, in get_socket
sock_info = self._get_socket(all_credentials)
File "/usr/local/lib/python3.8/site-packages/pymongo/pool.py", line 1436, in _get_socket
self._raise_if_not_ready(emit_event=True)
File "/usr/local/lib/python3.8/site-packages/pymongo/pool.py", line 1407, in _raise_if_not_ready
_raise_connection_failure(
File "/usr/local/lib/python3.8/site-packages/pymongo/pool.py", line 250, in _raise_connection_failure
raise AutoReconnect(msg) from error
pymongo.errors.AutoReconnect: mongo:27017: connection pool paused
It turned out that Celery was leaving connections open, so here is the solution:
from mongoengine import connect, disconnect
from celery.signals import task_prerun

@task_prerun.connect
def on_task_init(*args, **kwargs):
    # db/host/port as configured for your app
    disconnect(alias='default')
    connect(db, host=host, port=port, maxPoolSize=400, minPoolSize=200, alias='default')
This could be related to the fact that PyMongo is not fork-safe, which matters if you're using a process pool in any way; that includes server software like uWSGI and even some configurations of popular ASGI servers.
Every time you run a query in PyMongo, that MongoClient object becomes fork-unsafe. A MongoClient object that has never run any queries is fork-safe. Created Database and Collection objects will reference back to their parent MongoClient without checking for existence.
I do see you are using MongoEngine, which uses PyMongo underneath. I don't know the semantics of that particular library, but I would assume they are the same.
In short: you must re-create your connections as part of your forking process, as sketched below.
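A minimal sketch of that for a Celery prefork worker using MongoEngine (the database name and host are placeholders; worker_process_init fires once in each freshly forked pool process, so this reconnects once per child rather than once per task as in the task_prerun variant above):
from celery.signals import worker_process_init
from mongoengine import connect, disconnect

@worker_process_init.connect
def reconnect_mongo(**kwargs):
    # Drop the connection state inherited from the parent process, then
    # open a fresh connection owned by this child process.
    disconnect(alias='default')
    connect('mydb', host='mongo', port=27017, alias='default')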
pip install -U pymongo; this is already fixed, see https://github.com/mongodb/mongo-python-driver/pull/944

google cloud logging not working while working with python

Python code:
import google.cloud.logging

client = google.cloud.logging.Client.from_service_account_json("file.config")
client.setup_logging()

import logging
logging.info("error")
Traceback:
Traceback (most recent call last):
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/grpc/_channel.py", line 923, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/grpc/_channel.py", line 826, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.PERMISSION_DENIED
details = "The caller does not have permission"
debug_error_string = "{"created":"@1612798593.245379000","description":"Error received from peer ipv4:142.250.71.42:443","file":"src/core/lib/surface/call.cc","file_line":1062,"grpc_message":"The caller does not have permission","grpc_status":7}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging/handlers/transports/background_thread.py", line 123, in _safely_commit_batch
batch.commit()
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging/logger.py", line 383, in commit
client.logging_api.write_entries(entries, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging/_gapic.py", line 121, in write_entries
self._gapic_api.write_log_entries(
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/cloud/logging_v2/gapic/logging_service_v2_client.py", line 476, in write_log_entries
return self._inner_api_calls["write_log_entries"](
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/retry.py", line 281, in retry_wrapped_func
return retry_target(
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/Users/soubhagyapradhan/Desktop/upwork/baby/data-science/env/lib/python3.8/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.PermissionDenied: 403 The caller does not have permission
Here I am trying to use Google Cloud Logging, but I am getting the above error.
Please take a look.
I am doing this in Python. Could there be an issue with how the service account was created?
As John said, have you checked whether the Service Account has the proper role assigned? As the official documentation says: "Using Cloud Logging library for Python requires the IAM Logs Writer role on Google Cloud. Most Google Cloud environments provide this role by default".
On the other hand, I am a bit curious that you used the "google-cloud-storage" tag, yet you don't mention anything related to it. Have you checked whether the issue is that you don't have enough permissions (e.g. Storage Admin) to access your bucket?
I was using:
client = google.cloud.logging.Client()
(not ".from_service_account_json") and I was getting the exact same error message. I also had the proper role, roles/logging.logWriter. I am using cloud-logging 2.6.0.
I found that when I ran the code above in a Vertex AI training job, if I didn't create the client with my project explicitly, like this:
client = google.cloud.logging.Client(project=<user project number>)
then google.cloud.logging used some kind of dummy project_id, i.e. "i285ca2410679d8f1p-tp", which doesn't have the necessary access. After putting in my project_id, all the error messages were gone.
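Putting the two fixes together, a minimal sketch (the key file path and project ID are placeholders, not values from this thread):
import google.cloud.logging

# Explicit service-account key AND explicit project, so neither the ADC
# lookup nor a dummy project ID gets in the way.
client = google.cloud.logging.Client.from_service_account_json(
    "service-account.json",
    project="my-project-id",
)
client.setup_logging()

import logging
logging.info("hello from Cloud Logging")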

_gdbm.error: Database needs recovery -- after running out of storage while fetching api data

I really don't know how to help myself here, being unfamiliar with this kind of error and not finding anything about it on Google. My last hope is one of you, since I don't know where else to go with this. I tried reinstalling all libraries and setting up a new venv, but I don't trust myself enough with these kinds of things to attempt anything more drastic.
The code triggering the error:
from wetterdienst import DWDObservationData, TimeResolution

observations_daily = DWDObservationData(
    station_ids=station_ids_d,
    parameter=params_daily,
    time_resolution=TimeResolution.DAILY,
    start_date="2015-01-01",
    end_date="2020-10-10",
    tidy_data=True,
    humanize_column_names=True,
)

for df in observations_hourly.collect_data():
    name = str(df.STATION_ID.iloc[0]).strip(".0")
    df.to_csv('./data/hourly/{}.csv'.format(name))
    print('{} done'.format(name))
API is found here: https://github.com/earthobservations/wetterdienst
Error:
Traceback (most recent call last):
File "/Users/sashakaun/PycharmProjects/wetter2.0/main.py", line 83, in <module>
for df in observations_hourly.collect_data():
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/api.py", line 178, in collect_data
df_parameter = self._collect_parameter_from_station(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/api.py", line 243, in _collect_parameter_from_station
df_period = collect_climate_observations_data(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/access.py", line 82, in collect_climate_observations_data
filenames_and_files = download_climate_observations_data_parallel(remote_files)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/access.py", line 106, in download_climate_observations_data_parallel
return list(zip(remote_files, files_in_bytes))
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 611, in result_iterator
yield fs.pop().result()
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/_base.py", line 388, in __get_result
raise self._exception
File "/usr/local/Cellar/python#3.8/3.8.5/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/wetterdienst/dwd/observations/access.py", line 124, in _download_climate_observations_data
return BytesIO(__download_climate_observations_data(remote_file=remote_file))
File "<decorator-gen-2>", line 2, in __download_climate_observations_data
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/region.py", line 1356, in get_or_create_for_user_func
return self.get_or_create(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/region.py", line 954, in get_or_create
with Lock(
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/lock.py", line 185, in __enter__
return self._enter()
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/lock.py", line 94, in _enter
generated = self._enter_create(value, createdtime)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/lock.py", line 178, in _enter_create
return self.creator()
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/region.py", line 920, in gen_value
self.backend.set(key, value)
File "/Users/sashakaun/PycharmProjects/wetter2.0/venv/lib/python3.8/site-packages/dogpile/cache/backends/file.py", line 239, in set
dbm[key] = pickle.dumps(value, pickle.HIGHEST_PROTOCOL)
_gdbm.error: Database needs recovery
Thanks a lot!!
A GDBM file has been corrupted. You need to use gdbmtool to recover the database. Install gdbmtool, then run:
gdbmtool FILENAME
where FILENAME is the name of the GDBM database file. A prompt will appear, and you can then enter:
gdbmtool> recover summary
If the database can be recovered it will display a summary of the recovery results, eg:
Recovery succeeded.
Keys recovered: 6870650, failed: 5, duplicate: 0
Buckets recovered: 64830, failed: 2
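To check from Python whether the cache file is usable again, a small sketch (the path is an assumption; it is wherever dogpile.cache's file backend put its DBM file):
import dbm.gnu

CACHE_FILE = '/path/to/dogpile_cache.dbm'  # placeholder path

try:
    with dbm.gnu.open(CACHE_FILE, 'r') as db:
        print('cache is readable,', len(db.keys()), 'entries')
except dbm.gnu.error as exc:
    # Still corrupt: recover with gdbmtool as shown above, or delete the
    # file and let the caching layer rebuild it on the next run.
    print('cache still broken:', exc)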

issue with gdata analytics client in python

I was able to successfully retrieve an OAuth access token for Google Analytics using Google's gdata python library.
However, my attempt to use the token to access Google Analytics data is failing. Here is the relevant code snippet:
client = gdata.analytics.client.AnalyticsClient(source='myapp')
client.auth_token = access_token  # retrieved earlier

dataQuery = gdata.analytics.client.DataFeedQuery({
    'ids': 'ga:********',
    'start-date': '2011-03-23',
    'end-date': '2011-04-04',
    'metrics': 'ga:percentNewVisits',
    'max-results': 50})

data = client.GetDataFeed(dataQuery)
I get the following stacktrace:
Traceback (most recent call last):
File "/Library/Python/2.6/site-packages/django/core/servers/basehttp.py", line 280, in run
self.result = application(self.environ, self.start_response)
File "/Library/Python/2.6/site-packages/django/core/servers/basehttp.py", line 674, in __call__
return self.application(environ, start_response)
File "/Library/Python/2.6/site-packages/django/core/handlers/wsgi.py", line 248, in __call__
response = self.get_response(request)
File "/Library/Python/2.6/site-packages/django/core/handlers/base.py", line 141, in get_response
return self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/Library/Python/2.6/site-packages/django/core/handlers/base.py", line 100, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/Library/Python/2.6/site-packages/django/contrib/auth/decorators.py", line 25, in _wrapped_view
return view_func(request, *args, **kwargs)
File "/Users/***/***/**/***/**/googleAnalyticsOauth.py", line 122, in googleAnalyticsTest
data = client.GetDataFeed(dataQuery)
File "build/bdist.macosx-10.6-universal/egg/gdata/analytics/client.py", line 77, in get_data_feed
**kwargs)
File "build/bdist.macosx-10.6-universal/egg/gdata/client.py", line 635, in get_feed
**kwargs)
File "build/bdist.macosx-10.6-universal/egg/gdata/client.py", line 265, in request
uri=uri, auth_token=auth_token, http_request=http_request, **kwargs)
File "build/bdist.macosx-10.6-universal/egg/atom/client.py", line 110, in request
self.auth_token.modify_request(http_request)
File "build/bdist.macosx-10.6-universal/egg/gdata/gauth.py", line 980, in modify_request
token_secret=self.token_secret, verifier=self.verifier)
File "build/bdist.macosx-10.6-universal/egg/gdata/gauth.py", line 604, in generate_hmac_signature
next, token, verifier=verifier)
File "build/bdist.macosx-10.6-universal/egg/gdata/gauth.py", line 565, in build_oauth_base_string
urllib.quote(params[key], safe='~')))
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/urllib.py", line 1216, in quote
res = map(safe_map.__getitem__, s)
TypeError: argument 2 to map() must support iteration
Anyone have any ideas what could be going wrong?
Thanks!
The problem is in gauth.py (part of the gdata client library) at about line 587 in version 2.0.15.
The value for the "max-results" parameter being passed to urllib.quote is 10000, an integer, not a string, so it doesn't support iteration.
My quick hacky fix is:
for key in sorted_keys:
    safe_str_param = urllib.quote(str(params[key]), safe='~')
    pairs.append('%s=%s' % (urllib.quote(key, safe='~'), safe_str_param))
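If you'd rather not patch the library, a workaround (my suggestion, not part of the original answer) is to pass every query parameter as a string on your side, since the integer 'max-results' value is exactly what ends up in urllib.quote:
dataQuery = gdata.analytics.client.DataFeedQuery({
    'ids': 'ga:********',
    'start-date': '2011-03-23',
    'end-date': '2011-04-04',
    'metrics': 'ga:percentNewVisits',
    'max-results': '50'})  # a string, so urllib.quote can iterate over it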
You can track down the problem yourself by using pdb, like this:
python -m pdb pagination_demo.py
> ga-api-http-samples-read-only/src/data_export/v2/python/pagination/pagination_demo.py(35)<module>()
-> """ """
# Note that (Pdb) indicates a prompt from the debugger
(Pdb) c
Executing query: https://www.google.com/analytics/feeds/data?max-results=10000&...&start-date=2011-01-01&ids=ga%3A999999&metrics=ga%3Apageviews&end-date=2011-12-30
# then you get more or less your trace above, plus this:
TypeError: argument 2 to map() must support iteration
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> lib/python2.6/urllib.py(1224)quote()
-> res = map(safe_map.__getitem__, s)
# Ok, let's see what the type of 's' is with pretty print
(Pdb) pp type(s)
<type 'int'>
(Pdb) q
If everything works (like when you add my hack above to gauth.py), you'll see this instead of the stack trace:
Total results found: 124
Total pages needed, with one page per API request: 1
The program finished and will be restarted
> ga-api-http-samples-read-only/src/data_export/v2/python/pagination/pagination_demo.py(35)<module>()
-> """
(Pdb) q
FWIW, one has to do the following:
my_client.auth_token = gdata.gauth.OAuthHmacToken(
    CONSUMER_KEY, CONSUMER_SECRET, TOKEN, TOKEN_SECRET,
    gdata.gauth.ACCESS_TOKEN)
