Google Appengine URLFetch Timeouts - Any Best Practices? - python

I'm new to Python and App Engine. I have a little toy app I've been playing with, and I ran into some script timeouts last night. I know you're capped at 10 seconds. What's the best practice for dealing with this?
Edit:
Sorry, I should have been more clear: the URLFetch timeout is the issue I am having. By default it is set to 5 seconds; the max is 10.
Traceback (most recent call last):
File "/base/python_runtime/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 636, in __call__
handler.post(*groups)
File "/base/data/home/apps/netlicense/3.349495357411133950/main.py", line 235, in post
graph.put_wall_post(message=body, attachment=attch, profile_id=self.request.get("fbid"))
File "/base/data/home/apps/netlicense/3.349495357411133950/facebook.py", line 149, in put_wall_post
return self.put_object(profile_id, "feed", message=message, **attachment)
File "/base/data/home/apps/netlicense/3.349495357411133950/facebook.py", line 131, in put_object
return self.request(parent_object + "/" + connection_name, post_args=data)
File "/base/data/home/apps/netlicense/3.349495357411133950/facebook.py", line 179, in request
file = urllib2.urlopen(urlpath, post_data)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 124, in urlopen
return _opener.open(url, data)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 381, in open
response = self._open(req, data)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 399, in _open
'_open', req)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 360, in _call_chain
result = func(*args)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1115, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/base/python_runtime/python_dist/lib/python2.5/urllib2.py", line 1080, in do_open
r = h.getresponse()
File "/base/python_runtime/python_dist/lib/python2.5/httplib.py", line 197, in getresponse
self._allow_truncated, self._follow_redirects)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 260, in fetch
return rpc.get_result()
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 592, in get_result
return self.__get_result_hook(self)
File "/base/python_runtime/python_lib/versions/1/google/appengine/api/urlfetch.py", line 361, in _get_fetch_result
raise DeadlineExceededError(str(err))
DeadlineExceededError: ApplicationError: 5

You have not told us what your application does, so here are some generic suggestions:
You can trap the timeout with the google.appengine.api.urlfetch.DownloadError exception class and gently ask the user to retry.
Web request run time is capped at 30 seconds; if what you are trying to download is relatively small, you could trap the exception and resubmit the urlfetch (just once) inside the same web request.
If working offline is not a problem for your app, you can move the urlfetch call to a worker task served by a task queue; one of the advantages of the taskqueue API is that App Engine automatically retries the task until it succeeds. These suggestions are sketched below.
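A minimal sketch of those suggestions, assuming the Python runtime's urlfetch and taskqueue APIs; fetch_with_retry and the /fetch-worker handler URL are hypothetical names, and the 10-second deadline is the maximum mentioned in the question:

from google.appengine.api import taskqueue, urlfetch

def fetch_with_retry(url):
    # Raise the per-call deadline from the 5-second default to the max.
    try:
        return urlfetch.fetch(url, deadline=10)
    except urlfetch.DownloadError:
        # Resubmit just once inside the same web request.
        return urlfetch.fetch(url, deadline=10)

# Or hand the fetch to a task queue worker, which App Engine retries
# automatically until it succeeds:
taskqueue.add(url='/fetch-worker', params={'target': 'http://example.com/'})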

Related

python ssl eof occurred in violation of protocol, wantwriteerror, zeroreturnerror

I'm running many celery tasks (20,000) using gevent for the pool (also monkey-patching all). Each of these tasks hits third-party services like AdWords to pull data.
I keep having tasks fail because of underlying SSL errors. Below are the stack traces from a few of the exceptions (in no particular order; these are failures from separate tasks). I also get WantWriteError and ZeroReturnError occasionally, but the EOF error seems to come up the most.
These errors happen while using different client libraries like googleads (the suds library for SOAP communication) as well as requests and elasticsearch. I'm guessing some of these libraries use urllib3 while others use urllib2, etc.
There has been a lot of info on the EOF issue and forcing TLSv1, but I can't seem to find a resolution that works.
I'm not sure if I'm running too many requests at once, if something's blocking, or what; any help would be greatly appreciated, I'm pulling my hair out over this one.
Traceback (most recent call last):
...
File "/srv/reporting/src/reporting/stats/adwords/client.py", line 58, in _awql_report
downloader = self._get_client(client_id).GetReportDownloader(version=self.REPORT_DOWNLOADER_VERSION)
File "/usr/local/lib/python2.7/dist-packages/googleads/adwords.py", line 283, in GetReportDownloader
return ReportDownloader(self, version, server)
File "/usr/local/lib/python2.7/dist-packages/googleads/adwords.py", line 400, in __init__
proxy=proxy_option, cache=self._adwords_client.cache).wsdl.schema
File "/usr/local/lib/python2.7/dist-packages/suds/client.py", line 115, in __init__
self.wsdl = reader.open(url)
File "/usr/local/lib/python2.7/dist-packages/suds/reader.py", line 150, in open
d = self.fn(url, self.options)
File "/usr/local/lib/python2.7/dist-packages/suds/wsdl.py", line 136, in __init__
d = reader.open(url)
File "/usr/local/lib/python2.7/dist-packages/suds/reader.py", line 74, in open
d = self.download(url)
File "/usr/local/lib/python2.7/dist-packages/suds/reader.py", line 92, in download
fp = self.options.transport.open(Request(url))
File "/usr/local/lib/python2.7/dist-packages/suds/transport/https.py", line 62, in open
return HttpTransport.open(self, request)
File "/usr/local/lib/python2.7/dist-packages/suds/transport/http.py", line 67, in open
return self.u2open(u2request)
File "/usr/local/lib/python2.7/dist-packages/suds/transport/http.py", line 132, in u2open
return url.open(u2request, timeout=tm)
File "/usr/lib/python2.7/urllib2.py", line 400, in open
response = self._open(req, data)
File "/usr/lib/python2.7/urllib2.py", line 418, in _open
'_open', req)
File "/usr/lib/python2.7/urllib2.py", line 378, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 1216, in https_open
return self.do_open(httplib.HTTPSConnection, req)
File "/usr/lib/python2.7/urllib2.py", line 1178, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 8] _ssl.c:504: EOF occurred in violation of protocol>
Traceback (most recent call last):
...
File "/srv/reporting/src/reporting/stats/analytics/client.py", line 57, in get_access_token
response = requests.post('https://accounts.google.com/o/oauth2/token', data)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 88, in post
return request('post', url, data=data, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 382, in send
raise SSLError(e, request=request)
SSLError: [Errno bad handshake] (-1, 'Unexpected EOF')
Traceback (most recent call last):
...
self.es.index(index=self.INDICE, doc_type=self.ROOT_CLASS.__name__, body=self.export(obj), id=obj.id)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/utils.py", line 68, in _wrapped
return func(*args, params=params, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/client/__init__.py", line 213, in index
_make_path(index, doc_type, id), params=params, body=body)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/transport.py", line 284, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/local/lib/python2.7/dist-packages/elasticsearch/connection/http_requests.py", line 44, in perform_request
response = self.session.request(method, url, data=body, timeout=timeout or self.timeout)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 327, in send
timeout=timeout
File "/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py", line 493, in urlopen
body=body, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py", line 319, in _make_request
httplib_response = conn.getresponse(buffering=True)
File "/usr/lib/python2.7/httplib.py", line 1030, in getresponse
response.begin()
File "/usr/lib/python2.7/httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "/usr/lib/python2.7/httplib.py", line 365, in _read_status
line = self.fp.readline()
File "/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py", line 273, in readline
data = self._sock.recv(self._rbufsize)
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 995, in recv
self._raise_ssl_error(self._ssl, result)
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 851, in _raise_ssl_error
raise ZeroReturnError()
ZeroReturnError
So let's break this down by each traceback block. The first ends with:
File "/usr/lib/python2.7/urllib2.py", line 1178, in do_open
raise URLError(err)
URLError: <urlopen error [Errno 8] _ssl.c:504: EOF occurred in violation of protocol>
This is coming from urllib2. The fact that this receives an EOF makes me think that the server closed the connection while you were waiting for that "thread" to read from the socket again. You might want to insert more time.sleep(0) calls to yield to gevent, as in the sketch below.
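A minimal sketch of that, assuming monkey-patched gevent as in the question; pull, the every-10-requests cadence, and the 30-second timeout are all illustrative:

import gevent
import requests

def pull(urls):
    results = []
    for i, url in enumerate(urls):
        results.append(requests.get(url, timeout=30).content)
        # Explicitly yield to the gevent hub every few requests so other
        # greenlets' sockets get serviced before their servers give up.
        if i % 10 == 0:
            gevent.sleep(0)
    return results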
The second traceback comes from requests:
File "/usr/local/lib/python2.7/dist-packages/requests/adapters.py", line 382, in send
raise SSLError(e, request=request)
SSLError: [Errno bad handshake] (-1, 'Unexpected EOF')
The [Errno bad handshake] makes me think this is a problem establishing the connection, which could be caused by an unexpected EOF. Is that caused by using gevent? I'm uncertain.
The final traceback is definitely from requests as well but it also is coming out of PyOpenSSL and isn't being caught by urllib3 or requests.
File "/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py", line 851, in _raise_ssl_error
raise ZeroReturnError()
ZeroReturnError
I did some searching and found that "According to the pyOpenSSL docs ZeroReturnError means that the SSL connection has been closed cleanly." This says to me that the server again closed the connection because you took too long to read anything from the socket.
In short, I think you need to explicitly yield more often just to ensure that these socket problems don't arise. That's just a guess though, so take it with a grain of salt.
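Beyond yielding, capping how many requests run at once may also help, given the 20,000 tasks mentioned in the question. A minimal sketch, assuming gevent is available; pull_account and account_ids are hypothetical, and the pool size of 50 is an arbitrary starting point to tune:

import gevent
import gevent.pool

# Cap concurrency so thousands of greenlets don't all hold open
# sockets at once, starving each other of hub time.
pool = gevent.pool.Pool(50)
jobs = [pool.spawn(pull_account, account_id) for account_id in account_ids]
gevent.joinall(jobs)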

How do I increase the timeout for imaplib requests?

I'm using imaplib to query Gmail's IMAP, but some requests are taking more than 60 seconds to return. This is already done in a task, so I have a full 10 minutes to do the request, but my tasks are failing due to the 60 second limit on urlfetch.
I've tried setting urlfetch.set_default_fetch_deadline(600), but it doesn't seem to do anything.
Here's a stacktrace:
The API call remote_socket.Receive() took too long to respond and was cancelled.
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 760, in uid
typ, dat = self._simple_command(name, command, *args)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 1070, in _simple_command
return self._command_complete(name, self._command(name, *args))
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 897, in _command_complete
typ, data = self._get_tagged_response(tag)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 999, in _get_tagged_response
self._get_response()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 916, in _get_response
resp = self._get_line()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 1009, in _get_line
line = self.readline()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/imaplib.py", line 1171, in readline
return self.file.readline()
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/socket.py", line 445, in readline
data = self._sock.recv(self._rbufsize)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ssl.py", line 301, in recv
return self.read(buflen)
File "/base/data/home/runtimes/python27/python27_dist/lib/python2.7/ssl.py", line 220, in read
return self._sslobj.read(len)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/remote_socket/_remote_socket.py", line 864, in recv
return self.recvfrom(buffersize, flags)[0]
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/remote_socket/_remote_socket.py", line 903, in recvfrom
apiproxy_stub_map.MakeSyncCall('remote_socket', 'Receive', request, reply)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall
rpc.CheckSuccess()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
DeadlineExceededError: The API call remote_socket.Receive() took too long to respond and was cancelled.
Which kind of DeadlineExceededError?
There are three kinds of DeadlineExceededError in App Engine.
https://developers.google.com/appengine/articles/deadlineexceedederrors
google.appengine.runtime.DeadlineExceededError: raised if the overall request times out, typically after 60 seconds, or 10 minutes for task queue requests.
google.appengine.runtime.apiproxy_errors.DeadlineExceededError: raised if an RPC exceeded its deadline. This is typically 5 seconds, but it is settable for some APIs using the 'deadline' option.
google.appengine.api.urlfetch_errors.DeadlineExceededError: raised if the URLFetch times out.
As you can see, the 10-minute limit of the task queue only helps with google.appengine.runtime.DeadlineExceededError. The type of DeadlineExceededError can be identified from the traceback (listed below). In this case, it is google.appengine.runtime.apiproxy_errors.DeadlineExceededError, which is raised after 5 seconds by default. (I will update the post after I figure out how to change it.)
TYPE 1. google.appengine.runtime.DeadlineExceededError
The traceback looks like
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 266, in Handle
result = handler(dict(self._environ), self._StartResponse)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__
rv = self.router.dispatch(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher
return route.handler_adapter(request, response)
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__
return handler.dispatch()
File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch
return method(*args, **kwargs)
File "/base/data/home/apps/s~tagtooadex2/test.371204033771063679/index.py", line 9, in get
pass
DeadlineExceededError
Solution:
This exception can be solved by using the task queue (10-minute limit), backends, or manual scaling options.
https://developers.google.com/appengine/docs/python/modules/#Python_Instance_scaling_and_class
TYPE 2. google.appengine.runtime.apiproxy_errors.DeadlineExceededError
The traceback looks like
DeadlineExceededError: The API call remote_socket.Receive() took too long to respond and was cancelled.
TYPE 3. google.appengine.api.urlfetch_errors.DeadlineExceededError
The traceback looks like
DeadlineExceededError: Deadline exceeded while waiting for HTTP response from URL: http://www.sogi.com.tw/newforum/article_list.aspx?topic_ID=6089521
Solution:
This exception can be solved by extending the deadline:
urlfetch.fetch(url, deadline=10*60)
https://developers.google.com/appengine/docs/python/urlfetch/fetchfunction
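Since the fix differs by type, it can help to catch each one explicitly. A minimal sketch, assuming the GAE Python 2.7 runtime; do_imap_work and the handle_* helpers are hypothetical placeholders:

from google.appengine.api import urlfetch_errors
from google.appengine.runtime import DeadlineExceededError
from google.appengine.runtime import apiproxy_errors

try:
    do_imap_work()
except apiproxy_errors.DeadlineExceededError:
    # TYPE 2: an RPC such as remote_socket.Receive() timed out.
    handle_rpc_timeout()
except urlfetch_errors.DeadlineExceededError:
    # TYPE 3: a URLFetch timed out; retry with a longer deadline.
    handle_fetch_timeout()
except DeadlineExceededError:
    # TYPE 1: the whole request timed out; move work to a task queue.
    handle_request_timeout()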
There's no mention of a timeout in the imaplib sources, so there are a couple of options, both sketched below:
imaplib uses the socket library to establish the connection, so you could use socket.setdefaulttimeout(timeoutValue), but if you do so, be aware of the side effects (it applies to every socket created afterwards).
The second option is to create your own IMAP4 subclass with a tunable timeout, say in its open function.
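A minimal sketch of both options, assuming the standard imaplib and socket modules; the 600-second value mirrors the task deadline from the question, and TimeoutIMAP4_SSL is a hypothetical name:

import imaplib
import socket

# Option 1: a process-wide default; every socket created afterwards
# (not just IMAP ones) inherits this timeout.
socket.setdefaulttimeout(600)

# Option 2: a subclass that sets the timeout on its own socket only.
class TimeoutIMAP4_SSL(imaplib.IMAP4_SSL):
    def open(self, host='', port=imaplib.IMAP4_SSL_PORT):
        imaplib.IMAP4_SSL.open(self, host, port)
        self.sock.settimeout(600)

conn = TimeoutIMAP4_SSL('imap.gmail.com')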
From the Google App Engine documentation, it seems like there are many possible causes for DeadlineExceededError.
In your case, it seems that it may be one of the last two (out of three) types of DeadlineExceededError presented on the page.

Moved Permanently error when running dev_appserver.py

I am trying the basic helloworld example (https://developers.google.com/appengine/docs/python/gettingstartedpython27/helloworld) and keep getting an HTTP Error 301: Moved Permanently error whenever I try to test my code using dev_appserver.py.
The two files I have are copied and pasted exactly from the developers.google.com site.
I have included the location of dev_appserver.py in both PATH and PYTHONPATH.
I am running this on Linux with Python 2.7.3 and App Engine 1.8.4.
The output on the terminal when I run this is...
[verma#localhost python]$ dev_appserver.py helloworld/
WARNING 2013-09-11 04:45:49,988 api_server.py:327] Could not initialize images API; you are likely missing the Python "PIL" module.
INFO 2013-09-11 04:45:49,999 api_server.py:138] Starting API server at: http://localhost:57128
INFO 2013-09-11 04:45:50,021 dispatcher.py:164] Starting module "default" running at: http://localhost:8080
INFO 2013-09-11 04:45:50,023 admin_server.py:117] Starting admin server at: http://localhost:8000
HTTPError()
HTTPError()
Traceback (most recent call last):
File "/home/verma/Documents/gae/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 1302, in communicate
req.respond()
File "/home/verma/Documents/gae/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 831, in respond
self.server.gateway(self).respond()
File "/home/verma/Documents/gae/google_appengine/lib/cherrypy/cherrypy/wsgiserver/wsgiserver2.py", line 2115, in respond
response = self.req.server.wsgi_app(self.env, self.start_response)
File "/home/verma/Documents/gae/google_appengine/google/appengine/tools/devappserver2/wsgi_server.py", line 256, in __call__
return app(environ, start_response)
File "/home/verma/Documents/gae/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 311, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "/home/verma/Documents/gae/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 97, in __call__
self._flush_logs(response.get('logs', []))
File "/home/verma/Documents/gae/google_appengine/google/appengine/tools/devappserver2/python/request_handler.py", line 233, in _flush_logs
apiproxy_stub_map.MakeSyncCall('logservice', 'Flush', request, response)
File "/home/verma/Documents/gae/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 94, in MakeSyncCall
return stubmap.MakeSyncCall(service, call, request, response)
File "/home/verma/Documents/gae/google_appengine/google/appengine/api/apiproxy_stub_map.py", line 328, in MakeSyncCall
rpc.CheckSuccess()
File "/home/verma/Documents/gae/google_appengine/google/appengine/api/apiproxy_rpc.py", line 156, in _WaitImpl
self.request, self.response)
File "/home/verma/Documents/gae/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 200, in MakeSyncCall
self._MakeRealSyncCall(service, call, request, response)
File "/home/verma/Documents/gae/google_appengine/google/appengine/ext/remote_api/remote_api_stub.py", line 226, in _MakeRealSyncCall
encoded_response = self._server.Send(self._path, encoded_request)
File "/home/verma/Documents/gae/google_appengine/google/appengine/tools/appengine_rpc.py", line 393, in Send
f = self.opener.open(req)
File "/usr/lib64/python2.7/urllib2.py", line 410, in open
response = meth(req, response)
File "/usr/lib64/python2.7/urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib64/python2.7/urllib2.py", line 448, in error
return self._call_chain(*args)
File "/usr/lib64/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/usr/lib64/python2.7/urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
HTTPError: HTTP Error 301: Moved Permanently
(the same traceback is printed a second time)
INFO 2013-09-11 04:46:01,635 module.py:593] default: "GET / HTTP/1.1" 500 -
I have a feeling I am missing something very basic but can't find it. By the way this was working a few days back and I don't remember doing anything stupid to break this :-(
Quick solution to fix your problem:
Open the file [GAE installation path]/google/appengine/tools/appengine_rpc.py, go to line 578, and comment out the following line:
# opener.add_handler(fancy_urllib.FancyProxyHandler())
It appears that dev_appserver needs to contact the Google App Engine servers on the internet every once in a while (even if you do not ask it to check for updates). The problem I had was that it cannot access Google's servers when behind a proxy (even though I have properly exported http_proxy and https_proxy).
When I use a wired network directly connected to the internet I have no problems at all.
Once dev_appserver has 'called home', it will continue to work without issue behind a proxy for a few more days, and then I get this problem again.
In the past I have noticed similar behavior when connected on wifi as well.
It appears that dev_appserver works best when on a wired network without proxies.
This is based simply on my observations; it would be great if someone could give a definitive answer to this question.

Unable to connect to secure website using mechanize in Python

I'm trying to open a secure (https) website using the mechanize library in Python. When I try to access the website, the server closes the connection and a BadStatusLine exception is raised.
I have tried modifying the headers using the addheaders property, but it made no difference.
import mechanize
br = mechanize.Browser()
print 'opening page ...'
resp = br.open('https://onlineservices.tin.nsdl.com/etaxnew/tdsnontds.jsp') #this one works fine
print 'ok'
print 'opening page 2 ...'
resp = br.open('https://incometaxindiaefiling.gov.in/portal/index.do') #exception raised
print 'ok'
Exception:
Traceback (most recent call last):
File ...
pydev_imports.execfile(file, globals, locals) #execute the script
File "Z:\pyTax\app_test.py", line 22, in
resp=br.open('https://incometaxindiaefiling.gov.in/portal/index.do')
File "build\bdist.win32\egg\mechanize\_mechanize.py", line 203, in open
File "build\bdist.win32\egg\mechanize\_mechanize.py", line 230, in _mech_open
File "build\bdist.win32\egg\mechanize\_opener.py", line 188, in open
File "build\bdist.win32\egg\mechanize\_http.py", line 316, in http_request
File "build\bdist.win32\egg\mechanize\_http.py", line 242, in read
File "build\bdist.win32\egg\mechanize\_mechanize.py", line 203, in open
File "build\bdist.win32\egg\mechanize\_mechanize.py", line 230, in _mech_open
File "build\bdist.win32\egg\mechanize\_opener.py", line 193, in open
File "build\bdist.win32\egg\mechanize\_urllib2_fork.py", line 344, in _open
File "build\bdist.win32\egg\mechanize\_urllib2_fork.py", line 332, in _call_chain
File "build\bdist.win32\egg\mechanize\_urllib2_fork.py", line 1170, in https_open
File "build\bdist.win32\egg\mechanize\_urllib2_fork.py", line 1116, in do_open
File "D:\Python27\lib\httplib.py", line 1031, in getresponse
response.begin()
File "D:\Python27\lib\httplib.py", line 407, in begin
version, status, reason = self._read_status()
File "D:\Python27\lib\httplib.py", line 371, in _read_status
raise BadStatusLine(line)
httplib.BadStatusLine: ''
httplib.BadStatusLine is a subclass of HTTPException, raised if a server responds with an HTTP status code that we don't understand. That's what's causing your problem. I am not entirely sure about the fix though, as your code works fine on my computer.
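If the failure is transient, one workaround is to catch the exception and retry; a minimal sketch, assuming the question's URL and that a single retry is acceptable:

import httplib
import mechanize

br = mechanize.Browser()
url = 'https://incometaxindiaefiling.gov.in/portal/index.do'
try:
    resp = br.open(url)
except httplib.BadStatusLine:
    # The server closed the connection without sending a status line;
    # retry once in case the failure was transient.
    resp = br.open(url)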

Program abruptly stops and throws URLERROR

import urllib2

from poster.encode import multipart_encode
from poster.streaminghttp import register_openers

def picscrazy(str, int):  # 'str' is the filename, 'int' is a flag; both names shadow builtins
    # Register poster's streaming HTTP handlers with urllib2.
    register_openers()
    # Build a streaming multipart/form-data body for the file upload.
    datagen, headers = multipart_encode({"imagefile[]": open(str, "rb")})
    request = urllib2.Request("http://www.picscrazy.com/process.php", datagen, headers)
    # The call that fails (line 14 in the traceback below):
    print(urllib2.urlopen(request).read())
str is the filename and int is just another flag.
The code uploads a file to an image hosting website. I am using poster for the POST request. The program stops after the request statement and gives an error; I can't tell from the error whether the problem is in my network or in the program.
Below is the traceback of the error:
Traceback (most recent call last):
File "C:\Documents and Settings\Administrator\Desktop\for exbii\res.py", line 42, in <module>
picscrazy(fname,1)
File "C:\Documents and Settings\Administrator\Desktop\for exbii\res.py", line 14, in picscrazy
print(urllib2.urlopen(request).read())
File "C:\Python25\Lib\urllib2.py", line 121, in urlopen
return _opener.open(url, data)
File "C:\Python25\Lib\urllib2.py", line 374, in open
response = self._open(req, data)
File "C:\Python25\Lib\urllib2.py", line 392, in _open
'_open', req)
File "C:\Python25\Lib\urllib2.py", line 353, in _call_chain
result = func(*args)
File "C:\Python25\lib\poster\streaminghttp.py", line 142, in http_open
return self.do_open(StreamingHTTPConnection, req)
File "C:\Python25\Lib\urllib2.py", line 1076, in do_open
raise URLError(err)
URLError: <urlopen error (10054, 'Connection reset by peer')>
If you can't display the headers coming back from the server, then the server has simply cut you off.
It may be that your request is bad, but that's unlikely.
It may be that you've exceeded bandwidth restrictions.
It may be that your requests appear to be a DDoS attack because they're happening too frequently; if so, spacing out retries (sketched below) can help.
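A minimal sketch of retrying with a backoff, assuming the question's urllib2 request object; post_with_backoff and the retry count are hypothetical:

import time
import urllib2

def post_with_backoff(request, retries=3):
    for attempt in range(retries):
        try:
            return urllib2.urlopen(request).read()
        except urllib2.URLError:
            if attempt == retries - 1:
                raise
            # Space out attempts so rapid re-posts don't look like a flood.
            time.sleep(2 ** attempt)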
