I'm getting some weird behavior: when the application starts up a new instance for the first time, I get a DeadlineExceededError. When I hit refresh in the browser, it works just fine, and it doesn't matter which page I request. The strange thing is that all my debugging code runs. In fact, I write to the log just prior to calling self.response and the message shows up in the console's log. This is pretty hard to troubleshoot, since I'm not having any page load problems in the development environment, and the traceback is a bit opaque to me:
E 2013-09-29 00:10:03.975
Traceback (most recent call last):
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in Handle
for chunk in result:
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/appstats/recording.py", line 1286, in appstats_wsgi_wrapper
end_recording(status)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/appstats/recording.py", line 1410, in end_recording
rec.save()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/appstats/recording.py", line 654, in save
key, len_part, len_full = self._save()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/ext/appstats/recording.py", line 678, in _save
namespace=config.KEY_NAMESPACE)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 1008, in set_multi
namespace=namespace)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 907, in _set_multi_with_policy
status_dict = rpc.get_result()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 612, in get_result
return self.__get_result_hook(self)
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/memcache/__init__.py", line 974, in __set_with_policy_hook
rpc.check_success()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 578, in check_success
self.__rpc.CheckSuccess()
File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/api/apiproxy_rpc.py", line 133, in CheckSuccess
raise self.exception
DeadlineExceededError
I 2013-09-29 00:10:03.988
This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application.
I'm not sure how to even go about debugging this, since the error seems to be after all my code has already run.
Edit: I should add this:
I 2013-09-29 00:09:06.919
DEBUG: Writing output!
E 2013-09-29 00:10:03.975
You can see there's nearly a full minute between logging "Writing output!" just before self.response is called, and when the error occurs.
DeadlineExceededError is raised in App Engine when a request to a frontend instance does not get a response within 60 seconds. What must be happening in your case: when there is no running instance and your app receives a new user request, a new instance is started to process it. The overall response time is then the instance startup time (library loading, initial data access, and so on) plus the time to process the user request, and exceeding the 60-second limit raises DeadlineExceededError. When you access your app again immediately, there is an already-running instance, so the response time is just the request-processing time and you get no error.
Please check the suggested approaches for handling DeadlineExceededError, including warmup requests, which keep an instance ready before a live user request arrives.
Related
In my Flask app:
@app.route("/profile", methods=["GET"])
@login_required
def profile():
    return render_template("profile.html", image_relative_path=session["profile_pfp"])
Adding session["profile_pfp"] made my app prone to crashing. Crashes only happen when I reload /profile rapidly.
I was also advised to make changes in my sessions.py to better fit Flask 2.4 and beyond. Will this fix my problems? (I don't think it will fix my problems with session.)
How was I supposed to know that using session[] could be this unstable? I wasted so much time trying to figure out why this was happening.
How can I fix this?
Here are some of the errors:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 1844, in finalize_request
response = self.process_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/flask/app.py", line 2340, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/usr/local/lib/python3.11/site-packages/flask_session/sessions.py", line 353, in save_session
if session.modified:
^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'modified'
The simpler the page, the harder it is to trigger this error; the more complex the page, the more noticeable the problem is.
So, just to repeat: the session variable is None. It does not happen consistently, only when I repeatedly reload the page many times in quick succession.
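The traceback shows flask-session's save_session dereferencing a session object that is None, which surfaces only under rapid concurrent reloads. The defensive pattern is to skip persistence when there is nothing to save; here is a framework-free sketch of that guard (SafeSessionInterface is an illustrative name, not a real flask-session class):

```python
class SafeSessionInterface:
    """Illustrative wrapper around a session interface: skip
    persistence when the request produced no session object or
    nothing was modified. Not a real flask-session class."""

    def __init__(self, inner):
        self.inner = inner

    def save_session(self, app, session, response):
        # Guard against session being None -- the exact condition that
        # raises "'NoneType' object has no attribute 'modified'".
        if session is None or not getattr(session, "modified", False):
            return
        self.inner.save_session(app, session, response)
```

In practice the real fix is usually upgrading flask-session to a release that includes this kind of guard, but the sketch shows why the AttributeError occurs.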
I have a Django app being served with nginx+gunicorn with 3 gunicorn worker processes. Occasionally (maybe once every 100 requests or so) one of the worker processes gets into a state where it starts failing most (but not all) requests that it serves, and then it throws an exception when it tries to email me about it. The gunicorn error logs look like this:
[2015-04-29 10:41:39 +0000] [20833] [ERROR] Error handling request
Traceback (most recent call last):
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 130, in handle
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/gunicorn/workers/sync.py", line 171, in handle_request
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/handlers/wsgi.py", line 206, in __call__
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 196, in get_response
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 226, in handle_uncaught_exception
File "/usr/lib/python2.7/logging/__init__.py", line 1178, in error
File "/usr/lib/python2.7/logging/__init__.py", line 1271, in _log
File "/usr/lib/python2.7/logging/__init__.py", line 1281, in handle
File "/usr/lib/python2.7/logging/__init__.py", line 1321, in callHandlers
File "/usr/lib/python2.7/logging/__init__.py", line 749, in handle
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/log.py", line 122, in emit
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/log.py", line 125, in connection
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/core/mail/__init__.py", line 29, in get_connection
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/module_loading.py", line 26, in import_by_path
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/module_loading.py", line 21, in import_by_path
File "/home/django/virtualenvs/homestead_django/local/lib/python2.7/site-packages/django/utils/importlib.py", line 40, in import_module
ImproperlyConfigured: Error importing module django.core.mail.backends.smtp: "No module named smtp"
So some uncaught exception is happening and then Django is trying to email me about it. The failure to import django.core.mail.backends.smtp doesn't make sense, because it should definitely be on the worker process's Python path. I can import it just fine from a manage.py shell, and I do get emails for other server errors (actual software bugs), so I know that works. It's as if the worker process's environment is corrupted somehow.
Once a worker process enters this state it has a really hard time recovering; almost every request it serves ends up failing in this same manner. If I restart gunicorn everything is good (until another worker process falls into this weird state again).
I don't notice any obvious patterns, so I don't think this is being triggered by a bug in my app (the failing URLs vary, and so on). It seems like some sort of race condition.
Currently I'm using gunicorn's --max-requests option to mitigate this problem but I'd like to understand what's going on here. Is this a race condition? How can I debug this?
I suggest you use Sentry, which gives you a smart way of handling errors.
You can use it as a hosted solution (getsentry) or install it on your own server (it's on GitHub).
I used to rely on Django's core log mailer; now I always use Sentry.
I don't work at Sentry, but their solution is pretty awesome!
We discovered one particular view that was pegging the CPU for a few seconds every time it was loaded that seemed to be triggering this issue. I still don't understand how slamming a gunicorn worker could result in a corrupted execution environment, but fixing the high-CPU view seems to have gotten rid of this issue.
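For reference, the mitigations discussed above (recycling workers with --max-requests, plus a worker timeout that kills a worker stuck on a CPU-bound view) can live in a gunicorn.conf.py. The setting names below are real gunicorn options, but the values are illustrative, not tuned recommendations:

```python
# gunicorn.conf.py -- illustrative values only
workers = 3
max_requests = 500        # recycle each worker after ~500 requests
max_requests_jitter = 50  # stagger recycling so workers don't restart together
timeout = 30              # replace a worker stuck on a CPU-bound view
```

Worker recycling masks slow state corruption; the timeout bounds the damage a single pegged view can do.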
I have a Python script that takes advantage of the latest Vimeo API (https://developer.vimeo.com/api/) to upload some videos to my Vimeo account.
Here is what, in a slightly simplified form, the script basically does:
from vimeo import VimeoClient
vimeo = VimeoClient('my_token_here')
uid = vimeo.upload('/path/to/file.mov')
When file.mov is 3MB or less everything works fine and the file is successfully uploaded. However, for larger files I get a timeout error:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/fabio/.virtualenvs/venv/src/vimeo/vimeo/uploads.py", line 79, in __call__
return do_upload()
File "/home/fabio/.virtualenvs/venv/src/vimeo/vimeo/uploads.py", line 70, in do_upload
self.upload_segment(upload_uri, _range, video_data, filetype or 'mp4')
File "/home/fabio/.virtualenvs/venv/src/vimeo/vimeo/uploads.py", line 135, in upload_segment
body=data, headers=request_headers)
File "/home/fabio/.virtualenvs/venv/lib/python2.7/site-packages/tornado/httpclient.py", line 85, in fetch
self._async_client.fetch, request, **kwargs))
File "/home/fabio/.virtualenvs/venv/lib/python2.7/site-packages/tornado/ioloop.py", line 389, in run_sync
return future_cell[0].result()
File "/home/fabio/.virtualenvs/venv/lib/python2.7/site-packages/tornado/concurrent.py", line 131, in result
return super(TracebackFuture, self).result(timeout=timeout)
File "/home/fabio/.virtualenvs/venv/lib/python2.7/site-packages/tornado/concurrent.py", line 65, in result
raise self._exception
HTTPError: HTTP 599: Timeout
This is the vimeo library I am using: https://github.com/vimeo/vimeo.py.
And the Tornado library in my virtual environment is updated to the 3.2.1 version.
Any tips for me?
From the Tornado source, the default request timeout for the HTTPClient that vimeo is using is 20 seconds. It looks like the Vimeo library attempts to upload as much of the video as possible and then queries the server to see how much was successfully uploaded. It is likely that uploading your video takes over 20 seconds and as a result times out. I'm not convinced they handle this properly, though: you get a timeout error from Tornado, yet they seem to want to support the whole file not being uploaded at once.
You could try modifying the vimeo library code that I linked above to have a much longer timeout by changing the linked line in your local copy to something like:
r = HTTPClient().fetch(upload_uri, method="PUT",
body=data, headers=request_headers,
request_timeout=9999.0)
If that doesn't work you could try raising an issue on their github issues tracker, and someone who actually works on the project might be able to help you further.
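The resume-after-timeout pattern described above (send a chunk, then ask the server how many bytes actually arrived) can be sketched generically. Here, upload_chunk and bytes_received are hypothetical callables standing in for the real PUT and verification requests; this is not the vimeo.py API:

```python
def upload_resumable(data, upload_chunk, bytes_received, chunk_size=1024 * 1024):
    """Upload `data` in chunks, resuming from the server-confirmed
    offset after any timeout. `upload_chunk(chunk, offset)` and
    `bytes_received()` are hypothetical stand-ins for the real
    PUT and verify requests."""
    offset = 0
    while offset < len(data):
        try:
            upload_chunk(data[offset:offset + chunk_size], offset)
        except TimeoutError:
            pass  # fall through and ask the server where we are
        offset = bytes_received()  # authoritative resume point
    return offset
```

With this shape, a per-request timeout only costs you one chunk's worth of retransmission instead of failing the whole upload.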
I have a Python object whose methods are currently exposed via XML-RPC using the standard xmlrpc.server.SimpleXMLRPCServer (with the ThreadingMixIn, but that should not be relevant).
The server is running on Win64 as are the clients. Some RPC methods return tables of information from a database to the client. I'm finding that even modest blocks of data are overwhelming the OS and I get this kind of error:
Traceback (most recent call last):
File "C:\Python32\lib\wsgiref\handlers.py", line 137, in run
self.result = application(self.environ, self.start_response)
File "U:\Me\src\application\my_xmlrpc_client.py", line 1510, in __call__
body = method(environ, start_response)
File "U:\Me\src\application\my_xmlrpc_client.py", line 305, in q_root
rows = proxy.return_table()
File "C:\Python32\lib\xmlrpc\client.py", line 1095, in __call__
return self.__send(self.__name, args)
File "C:\Python32\lib\xmlrpc\client.py", line 1423, in __request
verbose=self.__verbose
File "C:\Python32\lib\xmlrpc\client.py", line 1136, in request
return self.single_request(host, handler, request_body, verbose)
File "C:\Python32\lib\xmlrpc\client.py", line 1151, in single_request
return self.parse_response(resp)
File "C:\Python32\lib\xmlrpc\client.py", line 1323, in parse_response
return u.close()
File "C:\Python32\lib\xmlrpc\client.py", line 667, in close
raise Fault(**self._stack[0])
xmlrpc.client.Fault: :[Errno 12] Not enough space">
LIBRA.rsvfx.com - - [19/May/2011 15:58:09] "GET / HTTP/1.1" 500 59
Some research into the Errno 12 problem reveals that there's some issue with the underlying MS OS call and not with Python itself:
http://bugs.python.org/issue11395
I'm not a very experienced XML-RPC developer, but is there some standard convention I should follow for delivering large payloads that would result in more, smaller writes (as opposed to fewer, larger writes)?
And please remember I'm asking about buffer overruns; I don't want to debate why I'm using XML-RPC rather than rolling my own RESTful interface. I had to patch my WSGI application for this same problem by sending small 1k blocks rather than larger ones, but I'm not sure how to patch the XML-RPC application.
-- edit --
As requested, here is a code sample that reproduces the problem:
import xmlrpc.server

class RPCApp:
    def get_page(self):
        return ["data" * 64 for i in range(1024)]

if __name__ == '__main__':  # important to use this block, for processes to spawn correctly
    server = xmlrpc.server.SimpleXMLRPCServer(('127.0.0.1', 8989), allow_none=True, logRequests=False)
    server.register_instance(RPCApp())
    server.serve_forever()
And the client code:
import xmlrpc.client
proxy = xmlrpc.client.ServerProxy('http://127.0.0.1:8989', allow_none=True)
print(proxy.get_page())
If you make the page returned by the server small, the code works. As it is, the exception is thrown.
-- edit --
Seems to be resolved in Python 3.2.1rc1. Looks like we'll have to upgrade our installation.
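On the convention question: one common approach for large payloads over XML-RPC is to page the results so each marshalled response stays small. A sketch of how the table-returning server object could be split (the method names row_count and get_page are illustrative, not a standard):

```python
class PagedRPCApp:
    """Sketch: expose the table in pages so each XML-RPC response
    body stays small, instead of one huge marshalled payload."""

    def __init__(self):
        self._rows = ["data" * 64 for _ in range(1024)]

    def row_count(self):
        return len(self._rows)

    def get_page(self, offset, limit):
        # Each call marshals at most `limit` rows.
        return self._rows[offset:offset + limit]
```

Serve it with SimpleXMLRPCServer exactly as in the question; the client then loops over get_page(offset, limit) until offset reaches row_count(), which keeps every individual socket write small.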
I've used two different Python OAuth libraries with Django to authenticate with Twitter. The setup is on Apache with WSGI. When I restart the server, everything works great for about 10 minutes, and then httplib seems to lock up (see the following error).
I'm running only 1 process and 1 thread of WSGI, but that seems to make no difference.
I cannot figure out why it's locking up and giving this CannotSendRequest error. I've spent a lot of hours on this frustrating problem. Any hints/suggestions of what it could be would be greatly appreciated.
File "/usr/lib/python2.5/site-packages/django/core/handlers/base.py", line 92, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "mypath/auth/decorators.py", line 9, in decorated
return f(*args, **kwargs)
File "mypath/auth/views.py", line 30, in login
token = get_unauthorized_token()
File "/root/storm/eye/auth/utils.py", line 49, in get_unauthorized_token
return oauth.OAuthToken.from_string(oauth_response(req))
File "mypath/auth/utils.py", line 41, in oauth_response
connection().request(req.http_method, req.to_url())
File "/usr/lib/python2.5/httplib.py", line 866, in request
self._send_request(method, url, body, headers)
File "/usr/lib/python2.5/httplib.py", line 883, in _send_request
self.putrequest(method, url, **skips)
File "/usr/lib/python2.5/httplib.py", line 770, in putrequest
raise CannotSendRequest()
CannotSendRequest
This exception is raised when you reuse an httplib.HTTP object for a new request while you haven't called its getresponse() method for the previous request. Probably there was some other error before this one that left the connection in a broken state. The simplest reliable way to fix the problem is to create a new connection for each request instead of reusing one. Sure, it will be a bit slower, but I think that's not an issue given that you are running the application in a single process and a single thread.
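A minimal sketch of the one-connection-per-request approach, using Python 3's http.client (the same module is httplib in Python 2); the fetch helper is illustrative:

```python
import http.client

def fetch(host, path):
    # A fresh connection per request: a failure can never leave a
    # shared connection stuck in the request-sent state that raises
    # CannotSendRequest on the next call.
    conn = http.client.HTTPConnection(host)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return resp.status, resp.read()
    finally:
        conn.close()
```

The try/finally guarantees the socket is closed even when request() or getresponse() raises, so no broken connection survives to poison a later call.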
Also check your Python version. I had a similar situation after updating from Py-2.6 to Py-2.7. In Py-2.6 everything worked without any problems. Py-2.7's httplib uses HTTP/1.1 by default, which meant the server did not send back the Connection: close option in the response, so the connection handling was broken. In my case it worked with HTTP/1.0, though.
http.client.CannotSendRequest: Request-sent
While using the http.client module's HTTPConnection class, I ran into this error because my host name was incorrect.