I'm using imaplib to query Gmail's IMAP, but some requests are taking more than 60 seconds to return. This is already done in a task, so I have a full 10 minutes to do the request, but my tasks are failing due to the 60 second limit on urlfetch.
I've tried setting urlfetch.set_default_fetch_deadline(600), but it doesn't seem to do anything.
Here's a stacktrace:
The API call remote_socket.Receive() took too long to respond and was cancelled.
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 760, in uid
typ, dat = self._simple_command(name, command, *args)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 1070, in _simple_command
return self._command_complete(name, self._command(name, *args))
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 897, in _command_complete
typ, data = self._get_tagged_response(tag)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 999, in _get_tagged_response
self._get_response()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 916, in _get_response
resp = self._get_line()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 1009, in _get_line
line = self.readline()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 1171, in readline
return self.file.readline()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/socket.py", line 445, in readline
data = self._sock.recv(self._rbufsize)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ssl.py", line 301, in recv
return self.read(buflen)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ssl.py", line 220, in read
return self._sslobj.read(len)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/remote_socket/_remote_socket.py", line 864, in recv
return self.recvfrom(buffersize, flags)[0]
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/remote_socket/_remote_socket.py", line 903, in recvfrom
apiproxy_stub_map.MakeSyncCall('remote_socket', 'Receive', request, reply)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall
rpc.CheckSuccess()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
DeadlineExceededError: The API call remote_socket.Receive() took too long to respond and was cancelled.
Which kind of DeadlineExceededError?
There are three kinds of DeadlineExceededError in App Engine.
https://developers.google.com/appengine/articles/deadlineexceedederrors
google.appengine.runtime.DeadlineExceededError: raised if the overall request times out, typically after 60 seconds, or 10 minutes for task queue requests.
google.appengine.runtime.apiproxy_errors.DeadlineExceededError: raised if an RPC exceeded its deadline. This is typically 5 seconds, but it is settable for some APIs using the 'deadline' option.
google.appengine.api.urlfetch_errors.DeadlineExceededError: raised if the URLFetch times out.
As you can see, the 10 minute limit of the task queue only helps with google.appengine.runtime.DeadlineExceededError. The type of DeadlineExceededError can be identified via the traceback (listed below). In this case, it is google.appengine.runtime.apiproxy_errors.DeadlineExceededError, which is raised after 5 seconds by default. (I will update the post after I figure out how to change it.)
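As a minimal sketch, you can tell the three apart by catching each class explicitly (do_work is a hypothetical placeholder for your own code):

from google.appengine.runtime import DeadlineExceededError as RequestDeadline
from google.appengine.runtime.apiproxy_errors import (
    DeadlineExceededError as RpcDeadline)
from google.appengine.api.urlfetch_errors import (
    DeadlineExceededError as FetchDeadline)

try:
    do_work()  # hypothetical: your IMAP / urlfetch / datastore work
except RpcDeadline:
    pass   # TYPE 2: a single RPC (e.g. remote_socket.Receive) timed out
except FetchDeadline:
    pass   # TYPE 3: a urlfetch call timed out
except RequestDeadline:
    raise  # TYPE 1: the whole request hit the 60s / 10min limit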
TYPE 1. google.appengine.runtime.DeadlineExceededError
The traceback looks like
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 266, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~tagtooadex2/test.371204033771063679/index.py", line 9, in get
pass
DeadlineExceededError
Solution
This exception can be solved by using the task queue (which allows 10 minutes), backends, or manual scaling options.
https://developers.google.com/appengine/docs/python/modules/#Python_Instance_scaling_and_class
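As a minimal sketch, handing the slow work to a push queue task gives it the 10 minute deadline (the handler URL and queue name here are assumptions):

from google.appengine.api import taskqueue

# enqueue the slow work; the task handler gets a 10 minute request deadline
taskqueue.add(url='/tasks/build-summary', queue_name='default')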
TYPE 2. google.appengine.runtime.apiproxy_errors.DeadlineExceededError
The traceback looks like
DeadlineExceededError: The API call remote_socket.Receive() took too long to respond and was cancelled.
TYPE 3. google.appengine.api.urlfetch_errors.DeadlineExceededError
The traceback looks like
DeadlineExceededError: Deadline exceeded while waiting for HTTP response from URL: http://www.sogi.com.tw/newforum/article_list.aspx?topic_ID=6089521
Solution:
This exception can be solved by extending the deadline:
urlfetch.fetch(url, deadline=10*60)
https://developers.google.com/appengine/docs/python/urlfetch/fetchfunction
There's no mention of a timeout in the imaplib sources, so there are several options:
IMAP uses the socket library to establish the connection. Consider using socket.setdefaulttimeout(timeoutValue), but if you do so, be aware of its side effects (it applies to every new socket created in the process).
The second option is to create your own IMAP4 subclass with a tunable timeout, say in its open function, as sketched below.
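A minimal sketch of such a subclass, assuming the plain (non-SSL) IMAP4 class; IMAP4_SSL would need the same treatment of its own open():

import imaplib
import socket

class IMAP4WithTimeout(imaplib.IMAP4):
    """IMAP4 whose connection carries a per-socket timeout."""

    def __init__(self, host='', port=imaplib.IMAP4_PORT, timeout=None):
        self._timeout = timeout
        imaplib.IMAP4.__init__(self, host, port)

    def open(self, host='', port=imaplib.IMAP4_PORT):
        # same as the stdlib open(), plus a timeout on the new socket
        self.host = host
        self.port = port
        self.sock = socket.create_connection((host, port), self._timeout)
        self.file = self.sock.makefile('rb')

# usage: conn = IMAP4WithTimeout('imap.example.com', timeout=300)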
From the Google App Engine documentation, it seems like there are many possible causes for DeadlineExceededError.
In your case, it seems that it may be one of the last two (out of three) types of DeadlineExceededError presented on the page.
Related
Python 3.6. I use tweepy's streamer to get tweets. It works well, but sometimes, if I leave it open for more than 24 hours, I get this error:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\contrib\pyopenssl.py", line 277, in recv_into
return self.connection.recv_into(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\SSL.py", line 1547, in recv_into
self._raise_ssl_error(self._ssl, result)
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\SSL.py", line 1353, in _raise_ssl_error
raise WantReadError()
OpenSSL.SSL.WantReadError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\contrib\pyopenssl.py", line 277, in recv_into
return self.connection.recv_into(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\SSL.py", line 1547, in recv_into
self._raise_ssl_error(self._ssl, result)
File "C:\ProgramData\Anaconda3\lib\site-packages\OpenSSL\SSL.py", line 1370, in _raise_ssl_error
raise SysCallError(errno, errorcode.get(errno))
OpenSSL.SSL.SysCallError: (10054, 'WSAECONNRESET')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\response.py", line 302, in _error_catcher
yield
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\response.py", line 384, in read
data = self._fp.read(amt)
File "C:\ProgramData\Anaconda3\lib\http\client.py", line 449, in read
n = self.readinto(b)
File "C:\ProgramData\Anaconda3\lib\http\client.py", line 483, in readinto
return self._readinto_chunked(b)
File "C:\ProgramData\Anaconda3\lib\http\client.py", line 578, in _readinto_chunked
chunk_left = self._get_chunk_left()
File "C:\ProgramData\Anaconda3\lib\http\client.py", line 546, in _get_chunk_left
chunk_left = self._read_next_chunk_size()
File "C:\ProgramData\Anaconda3\lib\http\client.py", line 506, in _read_next_chunk_size
line = self.fp.readline(_MAXLINE + 1)
File "C:\ProgramData\Anaconda3\lib\socket.py", line 586, in readinto
return self._sock.recv_into(b)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\contrib\pyopenssl.py", line 293, in recv_into
return self.recv_into(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\contrib\pyopenssl.py", line 282, in recv_into
raise SocketError(str(e))
OSError: (10054, 'WSAECONNRESET')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:\ProgramData\Anaconda3\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "twitter_aspi_v0.8.py", line 179, in _init_stream
tweepy.Stream(auth, listener).userstream()
File "C:\ProgramData\Anaconda3\lib\site-packages\tweepy\streaming.py", line 396, in userstream
self._start(async)
File "C:\ProgramData\Anaconda3\lib\site-packages\tweepy\streaming.py", line 363, in _start
self._run()
File "C:\ProgramData\Anaconda3\lib\site-packages\tweepy\streaming.py", line 296, in _run
raise exception
File "C:\ProgramData\Anaconda3\lib\site-packages\tweepy\streaming.py", line 265, in _run
self._read_loop(resp)
File "C:\ProgramData\Anaconda3\lib\site-packages\tweepy\streaming.py", line 315, in _read_loop
line = buf.read_line().strip()
File "C:\ProgramData\Anaconda3\lib\site-packages\tweepy\streaming.py", line 180, in read_line
self._buffer += self._stream.read(self._chunk_size)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\response.py", line 401, in read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
File "C:\ProgramData\Anaconda3\lib\contextlib.py", line 100, in __exit__
self.gen.throw(type, value, traceback)
File "C:\ProgramData\Anaconda3\lib\site-packages\requests\packages\urllib3\response.py", line 320, in _error_catcher
raise ProtocolError('Connection broken: %r' % e, e)
requests.packages.urllib3.exceptions.ProtocolError: ('Connection broken: OSError("(10054, \'WSAECONNRESET\')",)', OSError("(10054, 'WSAECONNRESET')",))
My code is pretty long, and judging from the error, it seems to come from the way urllib3, OpenSSL, and tweepy access the Twitter API. I could handle this with a try before launching the streamer, but I would like to know if there is a better fix that would help me understand and avoid this. Thanks!
This looks more like a temporary connection timeout that is not handled by Tweepy, so you should just write a wrapper around it, catch the exception, and restart the stream. I don't think the exception can be avoided as such, because you connect to an external site and it can sometimes time out.
You should look at http://docs.tweepy.org/en/v3.5.0/streaming_how_to.html#handling-errors for the error handling part and see if on_error gets called in your case when a connection timeout happens:
import tweepy

class MyStreamListener(tweepy.StreamListener):
    def on_error(self, status_code):
        if status_code == 420:
            # returning False in on_data disconnects the stream
            return False
If this doesn't help, then use the wrapper approach.
According to the Twitter Developer documentation on rate limiting, a connection reset/failure is expected when you cross your usage limit.
Requests / 15-min window (user auth) = 900
Requests / 15-min window (app auth) = 1500
It also clearly states the following:
If the initial reconnect attempt is unsuccessful, your client should continue attempting to reconnect using an exponential back-off pattern until it successfully reconnects.
(Update)
Regardless of how your client gets disconnected, you should configure
your app to reconnect immediately. If your first reconnection attempt
is unsuccessful, we recommend that your app implement an exponential
back-off pattern in subsequent reconnection attempts (e.g. wait 1
second, then 2 seconds, then 4, 8, 16, etc), with some reasonable
upper limit. If this upper limit is reached, you should configure your
client to notify your team so that you can investigate further.
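A minimal sketch of such a reconnect loop with exponential back-off (run_stream and max_wait are hypothetical names; adapt it to your listener):

import time
import tweepy

def run_stream(auth, listener, max_wait=320):
    wait = 1
    while True:
        try:
            tweepy.Stream(auth, listener).userstream()
            wait = 1  # stream ended cleanly: reset the back-off
        except Exception as exc:
            print('Stream died: %r; reconnecting in %ss' % (exc, wait))
            time.sleep(wait)
            wait = min(wait * 2, max_wait)  # 1, 2, 4, 8, ... capped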
The standard (free) Twitter APIs, i.e. what the Tweepy API wraps, consist of a REST API and a Streaming API. The Streaming API provides low-latency access to Tweets. The Ads API has other limits when whitelisted.
REST API Limit
Clients may access a theoretical maximum of 3,200 statuses via the
page and count parameters for the user_timeline REST API methods.
Other timeline methods have a theoretical maximum of 800 statuses.
Requests for more than the limit will result in a reply with a status
code of 200 and an empty result in the format requested. Twitter still
maintains a database of all the Tweets sent by a user. However, to
ensure performance, this limit is in place on the API calls.
This may be enforced for the simple reason that users should not spam.
Solution:
You may catch the exception, re-establish the connection to Twitter, and continue reading tweets.
Unfortunately, there is no alternative to getting a better usage allowance from Twitter as of now.
I am using this Python script to migrate data from one ElastiCache redis instance to another. It uses redis pipelining to migrate data in chunks.
https://gist.github.com/thomasst/afeda8fe80534a832607
But I am getting this strange error:
Traceback (most recent call last):########### | ETA: 0:00:12
File "migrate-redis.py", line 95, in <module>
migrate()
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 664, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 644, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 837, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 464, in invoke
return callback(*args, **kwargs)
File "migrate-redis.py", line 74, in migrate
results = pipeline.execute(False)
File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 2593, in execute
return execute(conn, stack, raise_on_error)
File "/usr/local/lib/python2.7/dist-packages/redis/client.py", line 2446, in _execute_transaction
all_cmds = connection.pack_commands([args for args, _ in cmds])
File "/usr/local/lib/python2.7/dist-packages/redis/connection.py", line 637, in pack_commands
output.append(SYM_EMPTY.join(pieces))
MemoryError
There are no issues with RAM, as the node has 6 GB of RAM.
The Memory Profile of source redis is as follows:
used_memory:1483900120
used_memory_human:1.38G
used_memory_rss:1945829376
used_memory_peak:2431795528
used_memory_peak_human:2.26G
used_memory_lua:86016
mem_fragmentation_ratio:1.31
mem_allocator:jemalloc-3.6.0
What can be the possible cause for this ?
From your error log, this has no relation to your redis server. The error happens in your redis client when it packs all the commands into a memory buffer.
Maybe you could try decreasing the SCAN count option in your migrate-redis.py to test whether each chunk is too large to pack, as sketched below.
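A minimal sketch of scanning in smaller chunks so the pipeline buffer stays small (the count of 1000 and the connection parameters are assumptions; the gist's defaults may differ):

import redis

src = redis.StrictRedis(host='source-host', port=6379)
dst = redis.StrictRedis(host='dest-host', port=6379)

cursor = 0
while True:
    # a smaller count means fewer commands packed per execute()
    cursor, keys = src.scan(cursor, count=1000)
    pipe = dst.pipeline(transaction=False)
    for key in keys:
        ttl = src.pttl(key)   # remaining TTL in ms, or -1 if none
        data = src.dump(key)
        if data is not None:
            pipe.restore(key, max(ttl, 0), data)
    pipe.execute()
    if cursor == 0:
        break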
I have created a backend for my Google App Engine app that looks like this:
backends:
- name: dbops
options: dynamic
and I've created an admin handler for it:
- url: /backend/.*
script: backend.app
login: admin
Now I understand that admin jobs should be able to run forever, and I'm launching this job with a TaskQueue, but for some reason mine is not. My job simply creates a summary table in the datastore from a much larger table. That table holds about 12000 records, and it takes several minutes to process the job on the development server, but it works fine. When I push the code out to appspot and try to run the same job, I get what look like datastore timeouts.
Traceback (most recent call last):
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1536, in __call__
rv = self.handle_exception(request, response, e)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1530, in __call__
rv = self.router.dispatch(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1278, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 1102, in __call__
return handler.dispatch()
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 572, in dispatch
return self.handle_exception(e, self.app.debug)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.5.1/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~myzencoder/dbops.362541511260492787/backend.py", line 626, in get
for asset in assets:
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/db/__init__.py", line 2314, in next
return self.__model_class.from_entity(self.__iterator.next())
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2816, in next
next_batch = self.__batcher.next()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2678, in next
return self.next_batch(self.AT_LEAST_ONE)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2715, in next_batch
batch = self.__next_batch.get_result()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 604, in get_result
return self.__get_result_hook(self)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/datastore/datastore_query.py", line 2452, in __query_result_hook
self._batch_shared.conn.check_rpc_success(rpc)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1224, in check_rpc_success
raise _ToDatastoreError(err)
Timeout: The datastore operation timed out, or the data was temporarily unavailable.
Anyone got any suggestions on how to make this work?
While the backend request can run for a long time, a query can only run for 60 seconds. You'll have to loop over your query results with cursors, as sketched below.
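A minimal sketch of paging with cursors in the old db API (Asset, PAGE_SIZE and summarize are hypothetical names standing in for your model and per-entity work):

PAGE_SIZE = 500

def build_summary():
    query = Asset.all()          # hypothetical model from your backend.py
    cursor = None
    while True:
        if cursor:
            query.with_cursor(cursor)
        batch = query.fetch(PAGE_SIZE)
        if not batch:
            break
        for asset in batch:
            summarize(asset)     # hypothetical per-entity work
        cursor = query.cursor()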
Mapreduce will get you a result quicker by doing the queries in parallel.
In production you use the HR datastore and you can run into contention problems. See this article.
https://developers.google.com/appengine/articles/scaling/contention?hl=nl
And have a look at mapreduce for creating a report. Maybe this is a better solution.
We're trying to use MapReduce heavily in our project.
Now we have this problem: there are lots of 'DeadlineExceededError' errors in the log...
One example (the traceback differs a bit each time):
Traceback (most recent call last):
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 207, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/python27_runtime/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~sba/1.362471299468574812/mapreduce/base_handler.py", line 65, in post
self.handle()
File "/base/data/home/apps/s~sba/1.362471299468574812/mapreduce/handlers.py", line 208, in handle
ctx.flush()
File "/base/data/home/apps/s~sba/1.362471299468574812/mapreduce/context.py", line 333, in flush
pool.flush()
File "/base/data/home/apps/s~sba/1.362471299468574812/mapreduce/context.py", line 221, in flush
self.__flush_ndb_puts()
File "/base/data/home/apps/s~sba/1.362471299468574812/mapreduce/context.py", line 239, in __flush_ndb_puts
ndb.put_multi(self.ndb_puts.items, config=self.__create_config())
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/model.py", line 3625, in put_multi
for future in put_multi_async(entities, **ctx_options)]
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 323, in get_result
self.check_success()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 318, in check_success
self.wait()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 302, in wait
if not ev.run1():
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/eventloop.py", line 219, in run1
delay = self.run0()
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/eventloop.py", line 181, in run0
callback(*args, **kwds)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/tasklets.py", line 365, in _help_tasklet_along
value = gen.send(val)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/ext/ndb/context.py", line 274, in _put_tasklet
keys = yield self._conn.async_put(options, datastore_entities)
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1560, in async_put
for pbs, indexes in pbsgen:
File "/base/python27_runtime/python27_lib/versions/1/google/appengine/datastore/datastore_rpc.py", line 1350, in __generate_pb_lists
incr_size = pb.lengthString(pb.ByteSize()) + 1
DeadlineExceededError
My questions are:
How can we avoid this Error?
What happens with the job: does it get retried (and if so, how can we control it) or not?
Does it cause data inconsistency in the end?
Apparently you are doing more puts than it is possible to insert in one datastore call. You have multiple options here:
If this is a relatively rare event, ignore it. Mapreduce will retry the slice and will lower the put pool size. Make sure that your map is idempotent.
Take a look at http://code.google.com/p/appengine-mapreduce/source/browse/trunk/python/src/mapreduce/context.py: in your main.py you can lower DATASTORE_DEADLINE, MAX_ENTITY_COUNT or MAX_POOL_SIZE to lower the size of the pool for the whole mapreduce (see the sketch after this list).
If you're using an InputReader, you might be able to adjust the default batch_size to reduce the number of entities processed by each task.
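A minimal sketch of that second option, assuming the constants are read dynamically from the mapreduce context module (the values below are illustrative assumptions, not recommended settings; check the linked source for the real defaults and how they are consumed):

# in main.py, before any mapreduce work runs
from mapreduce import context

context.DATASTORE_DEADLINE = 15      # seconds allowed per datastore RPC
context.MAX_ENTITY_COUNT = 100       # entities buffered per put batch
context.MAX_POOL_SIZE = 500 * 1000   # bytes buffered before a flush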
I believe the task queue will retry tasks, but you probably don't want it to, since it'll likely hit the same DeadlineExceededError.
Data inconsistencies are possible.
See this question as well.
App Engine - Task Queue Retry Count with Mapper API
New to Python and App Engine. I've got a little toy I've been playing with and ran into some script timeouts last night. I know you're capped at 10 seconds. What's the best practice for dealing with this?
Edit
Sorry, I should have been clearer: the URLFetch timeout is the issue I am having. By default it is set to 5 seconds; the max is 10.
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 636, in __call__
handler.post(*groups)
File "/base/data/home/apps/netlicense/3.349495357411133950/main.py", line 235, in post
graph.put_wall_post(message=body, attachment=attch, profile_id=self.request.get("fbid"))
File "/base/data/home/apps/netlicense/3.349495357411133950/facebook.py", line 149, in put_wall_post
return self.put_object(profile_id, "feed", message=message, **attachment)
File "/base/data/home/apps/netlicense/3.349495357411133950/facebook.py", line 131, in put_object
return self.request(parent_object + "/" + connection_name, post_args=data)
File "/base/data/home/apps/netlicense/3.349495357411133950/facebook.py", line 179, in request
file = urllib2.urlopen(urlpath, post_data)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 124, in urlopen
return _opener.open(url, data)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 381, in open
response = self._open(req, data)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 399, in _open
'_open', req)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain
result = func(*args)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1115, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1080, in do_open
r = h.getresponse()
File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 197, in getresponse
self._allow_truncated, self._follow_redirects)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 260, in fetch
return rpc.get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 361, in _get_fetch_result
raise DeadlineExceededError(str(err))
DeadlineExceededError: ApplicationError: 5
You have not told us what your application does, so here are some generic suggestions:
You can trap the timeout exception with the exception class google.appengine.api.urlfetch.DownloadError and gently alert the user to retry.
Web request run time is 30 seconds max; if what you are trying to download is relatively small, you could probably trap the exception and resubmit the urlfetch (just one time) inside the same Web request, as sketched after this list.
If working offline is not a problem for your app, you can move the Urlfetch call to a worker task served by a Task Queue; one of the advantages of using the taskqueue API is that App Engine automatically retries the task until it succeeds.
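A minimal sketch of the second suggestion, trapping DownloadError and retrying once (fetch_with_one_retry is a hypothetical helper name):

from google.appengine.api import urlfetch

def fetch_with_one_retry(url):
    try:
        return urlfetch.fetch(url, deadline=10)
    except urlfetch.DownloadError:
        # one retry inside the same request; a second failure propagates
        return urlfetch.fetch(url, deadline=10)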