Exceptions in HTTP(WSGI)+JsonDocument Spyne Client - python

I've written a server-side service using Spyne. I want to use the Spyne client code as well, but I can't get it to work without raising exceptions.
The server side code is something like (I've removed the imports and unified files):
class NotificationsRPC(ServiceBase):

    @rpc(Uuid, DateTime, _returns=Integer)
    def new_player(ctx, player_uuid, birthday):
        # A lot of irrelevant code
        return 0

    @rpc(Uuid, _returns=Integer)
    def deleted_player(ctx, player_uuid):
        # A lot of irrelevant code
        return 0

    # Many other similar methods
radiante_app = Application(
    [NotificationsRPC],
    tns="radiante.rpc",
    in_protocol=JsonDocument(validator="soft"),
    out_protocol=JsonDocument()
)

wsgi_app = WsgiApplication(radiante_app)
server = make_server('127.0.0.1', 27182, wsgi_app)
server.serve_forever()
This code runs properly; I can make requests to it through cURL (the real code runs under uWSGI, but in this example I'm using Python's built-in WSGI server).
The problem appears in the client-side code. It's something like this (RadianteRPC is the same class as on the server side, but with pass in the method bodies):
radiante_app = Application(
    [RadianteRPC],
    tns="radiante.rpc",
    in_protocol=JsonDocument(validator="soft"),
    out_protocol=JsonDocument()
)

rad_client = HttpClient("http://127.0.0.1:27182", radiante_app)

# In the real code, the parameters make more sense.
rad_client.service.new_player(uuid.UUID(), datetime.utcnow())
Then, when the code is executed, I get the following error:
File "/vagrant/apps/radsync/signal_hooks/player.py", line 74, in _player_post_save
created
File "/home/vagrant/devenv/local/lib/python2.7/site-packages/spyne/client/http.py", line 64, in __call__
self.get_in_object(self.ctx)
File "/home/vagrant/devenv/local/lib/python2.7/site-packages/spyne/client/_base.py", line 144, in get_in_object
message=self.app.in_protocol.RESPONSE)
File "/home/vagrant/devenv/local/lib/python2.7/site-packages/spyne/protocol/dictdoc.py", line 278, in decompose_incoming_envelope
raise ValidationError("Need a dictionary with exactly one key "
ValidationError: Fault(Client.ValidationError: 'The value "\'Need a dictionary with exactly one key as method name.\'" could not be validated.')
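For reference, the check that raises here concerns Spyne's JSON envelope: each document on the wire must be a dictionary with exactly one key, the method name, mapped to the arguments. A quick way to see the expected shape is to hand-build a request against the server above (a sketch using the requests library; the argument values are placeholders):

import requests

# The envelope JsonDocument expects: {"method_name": [arguments]}
payload = {"new_player": ["12345678-1234-5678-1234-567812345678",  # placeholder UUID
                          "2000-01-01T00:00:00"]}                  # placeholder datetime
response = requests.post("http://127.0.0.1:27182", json=payload)
print(response.json())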
It's worth noting that the client is implemented inside a Django app (not my decision), but I don't think that's related to the problem.
I've followed some hints from this question (adapting its ZeroMQ transport example to the HTTP transport): Is there an example of a Spyne client?
Thank you for your attention.

Related

Request timed out: timeout('timed out') in Python's HTTPServer

I am trying to create a simple HTTP server that uses Python's HTTPServer and BaseHTTPRequestHandler from http.server (https://github.com/python/cpython/blob/main/Lib/http/server.py).
There are numerous examples of this approach online and I don't believe I am doing anything unusual.
I am simply importing the classes via

from http.server import HTTPServer, BaseHTTPRequestHandler

in my code.
My code overrides the do_GET() method to parse the path variable to determine what page to show.
However, if I start this server and connect to it locally (e.g. http://127.0.0.1:50000), the first page loads fine. If I navigate to another page (via my first page's links), that too works fine. However, on occasion (and this is somewhat sporadic), there is a delay, and the server log shows a Request timed out: timeout('timed out') error. I have tracked this down to the handle_one_request method in the BaseHTTPRequestHandler class:
def handle_one_request(self):
    """Handle a single HTTP request.

    You normally don't need to override this method; see the class
    __doc__ string for information on how to handle specific HTTP
    commands such as GET and POST.

    """
    try:
        self.raw_requestline = self.rfile.readline(65537)
        if len(self.raw_requestline) > 65536:
            self.requestline = ''
            self.request_version = ''
            self.command = ''
            self.send_error(HTTPStatus.REQUEST_URI_TOO_LONG)
            return
        if not self.raw_requestline:
            self.close_connection = True
            return
        if not self.parse_request():
            # An error code has been sent, just exit
            return
        mname = 'do_' + self.command  ## the name of the method is created
        if not hasattr(self, mname):  ## checking that we have that method defined
            self.send_error(
                HTTPStatus.NOT_IMPLEMENTED,
                "Unsupported method (%r)" % self.command)
            return
        method = getattr(self, mname)  ## getting that method
        method()  ## finally calling it
        self.wfile.flush()  # actually send the response if not already done.
    except socket.timeout as e:
        # a read or a write timed out. Discard this connection
        self.log_error("Request timed out: %r", e)
        self.close_connection = True
        return
You can see where the exception is thrown in the "except socket.timeout as e:" clause.
I have tried overriding this method by including it in my code, but it is not clear what is causing the error, so I keep running into dead ends. I've tried creating very basic HTML pages to see if there was something in the page itself, but even "blank" pages cause the same sporadic issue.
What's odd is that sometimes a page loads instantly, and then, almost randomly, it will time out. Sometimes the same page, sometimes a different page.
I've played with the http.timeout setting, but it makes no difference. I suspect it's some underlying socket issue, but am unable to diagnose it further.
This is on a Mac running Big Sur 11.3.1, with Python version 3.9.4.
Any ideas on what might be causing this timeout, and in particular any suggestions for a resolution? Any pointers would be appreciated.
After further investigation, this particular issue appears to be related to Safari. Running the exact same code with Firefox does not show the same issue.
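If Safari is indeed holding keep-alive connections open, one mitigation (a sketch under that assumption; the 5-second value and page body are placeholders) is to set a short socket timeout on the handler class, so idle connections are discarded quickly instead of lingering until the browser gives up:

from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    # StreamRequestHandler applies this to the socket, so an idle
    # keep-alive connection is dropped after 5 s (placeholder value).
    timeout = 5

    def do_GET(self):
        body = b"<html><body>hello</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('127.0.0.1', 50000), Handler).serve_forever()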

gRPC-python: switching Servicer while the gRPC server is running? (simulation - real mode switching)

For our new open lab device automation standard (https://gitlab.com/SiLA2/sila_python) we would like to run the devices (= gRPC servers) in two modes: a simulation mode and a real mode, with the same set of remote calls. In simulation mode the server shall just return simulated responses; in real mode it should communicate with the hardware.
My first idea was to create two almost identical gRPC servicer Python classes in separate modules, like in this example:
in hello_sim.py:
class SayHello(SayHello_pb2_grpc.SayHelloServicer):
    # ... implementation of the simulation servicer
    def SayHello(self, request, context):
        # simulation code ...
        return SayHello_pb2.SayHello_response("simulation")
in hello_real.py:
class SayHello(SayHello_pb2_grpc.SayHelloServicer):
    # ... implementation of the real servicer
    def SayHello(self, request, context):
        # real hardware code
        return SayHello_pb2.SayHello_response("real")
and then, after creating the gRPC server in server.py, I could switch between simulation and real mode by re-registering the servicer with the gRPC server, e.g.:
server.py
# imports, init ...
grpc_server = GRPCServer(ThreadPoolExecutor(max_workers=10))
sh_sim = SayHello_sim.SayHello()
sh_real = SayHello_real.SayHello()
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_sim, grpc_server)
grpc_server.run()

# ..... and later, still while the same gRPC server is running, re-register, e.g.:
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_real, grpc_server)
to be able to call the real hardware code;
or by exchanging the reference to the servicer object, e.g.:

# imports, init ...
grpc_server = GRPCServer(ThreadPoolExecutor(max_workers=10))
sh_sim = SayHello_sim.SayHello()
sh_real = SayHello_real.SayHello()
sh_current = sh_sim
SayHello_pb2_grpc.add_SayHelloServicer_to_server(sh_current, grpc_server)
grpc_server.run()

# ..... and then later, still while the same gRPC server is running, switch the reference, like
sh_current = sh_real
# so that the server would just use the other servicer object for the next calls ...
but neither strategy works :(
When calling the server in simulation mode from a gRPC client, I would expect it to reply (according to the example) "simulation":
gRPC_client.py
# imports, init ....
response = self.SayHello_stub.SayHello()
print(response)
>'simulation'
and after switching to real mode (by whatever mechanism), "real":
# after switching to real mode ...
response = self.SayHello_stub.SayHello()
print(response)
>'real'
What is the cleanest and most elegant solution to achieve this mode switching without completely shutting down the gRPC server (and thereby losing the connection to the client)?
Thanks a lot in advance for your help!
PS: (Shutting down the gRPC server and re-registering would of course work, but this is not what we want.)
A kind colleague of mine provided me with a good suggestion:
One could use the dependency injection concept (see https://en.wikipedia.org/wiki/Dependency_injection), like:

class SayHello(pb2_grpc.SayHelloServicer):
    def inject_implementation(self, say_hello_implementation):
        self.implementation = say_hello_implementation

    def SayHello(self, request, context):
        return self.implementation.sayHello(request, context)
for a full example, see
https://gitlab.com/SiLA2/sila_python/tree/master/examples/simulation_real_mode_demo
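A hedged usage sketch of that idea (the implementation classes below are illustrative, mirroring the question's generated names, not code from the linked repo): register one servicer once, then swap the injected implementation while the server keeps running:

# Hypothetical implementations; message construction mirrors the question's pseudocode.
class SayHelloSim:
    def sayHello(self, request, context):
        return SayHello_pb2.SayHello_response("simulation")

class SayHelloReal:
    def sayHello(self, request, context):
        return SayHello_pb2.SayHello_response("real")

servicer = SayHello()
servicer.inject_implementation(SayHelloSim())
SayHello_pb2_grpc.add_SayHelloServicer_to_server(servicer, grpc_server)
grpc_server.run()

# Later, while the same server keeps serving, switch modes:
servicer.inject_implementation(SayHelloReal())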

Setting (mocking) request headers for Flask app unit test

Does anyone know of a way to set (mock) the User-Agent of the request object provided by Flask (Werkzeug) during unit testing?
As it currently stands, when I attempt to obtain details such as request.headers['User-Agent'], a KeyError is raised because the Flask test_client() doesn't set these up (see partial stack trace below):
File "/Users/me/app/rest/app.py", line 515, in login
if request.headers['User-Agent']:
File "/Users/me/.virtualenvs/app/lib/python2.7/site-packages/werkzeug/datastructures.py", line 1229, in __getitem__
return self.environ['HTTP_' + key]
KeyError: 'HTTP_USER_AGENT'
-- UPDATE --
Along with the (accepted) solution below, the environ_base hint led me to another SO solution. The premise of that solution is to create a wrapper class for the Flask app and override its __call__ method to set the environment variables automatically. That way, the variables are set for all calls. So the solution I ended up implementing was to create this proxy class:
class FlaskTestClientProxy(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        environ['REMOTE_ADDR'] = environ.get('REMOTE_ADDR', '127.0.0.1')
        environ['HTTP_USER_AGENT'] = environ.get('HTTP_USER_AGENT', 'Chrome')
        return self.app(environ, start_response)
And then wrapping the WSGI container with that proxy:
app.wsgi_app = FlaskTestClientProxy(app.wsgi_app)
test_client = app.test_client()
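With the proxy in place, every request issued through test_client carries those defaults; a hypothetical test (the /login route is an assumption for illustration):

def test_login_sees_user_agent():
    # The proxy injects HTTP_USER_AGENT, so the view's
    # request.headers['User-Agent'] lookup no longer raises KeyError.
    response = test_client.get('/login')
    assert response.status_code == 200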
You need to pass in environ_base when you call get() or post(). E.g.:

client = app.test_client()
response = client.get('/your/url/',
                      environ_base={'HTTP_USER_AGENT': 'Chrome, etc'})
Then your request.user_agent should be whatever you pass in, and you can access it via request.headers['User-Agent'].
See http://werkzeug.pocoo.org/docs/test/#testing-api for more info.
Even though what @chris-mckinnel wrote works, I wouldn't opt to use environ_base in this case.
You can easily do something as shown below:
with app.test_request_context('url', headers={'User-Agent': 'Your value'}):
    pass
This should do the job and at the same time it keeps your code explicit - explicit is better than implicit.
If you want to know what you can pass to test_request_context, reference the EnvironBuilder definition, which can be found here.

Thrift async response is crossed with wrong service method

I came across a strange issue while using a Thrift server in TThreadedServer mode. My test client makes parallel calls (100 calls) to the server. Say two methods in my Thrift service are load_account() and getRequestQueueCoun(). These methods have async calls: send_load_account()/recv_load_account() and send_getRequestQueueCoun()/recv_getRequestQueueCoun().
The problem I am facing is that the response of the send_getRequestQueueCoun() call is being captured in recv_load_account().
I found the response in the following line:

def recv_load_account(self):
    (fname, mtype, rseqid) = self._iprot.readMessageBegin()  # here fname is the other method
Server initialization code:

handler = SyncServiceHandler(settings.SERVER_NAME, settings.SERVER_LISTEN_IP, settings.SERVER_LISTEN_PORT, isDispatcher)
transport = TSocket.TServerSocket(settings.SERVER_LISTEN_IP, settings.SERVER_LISTEN_PORT)
processor = SyncService.Processor(handler)
tfactory = TTransport.TBufferedTransportFactory()
pfactory = TBinaryProtocol.TBinaryProtocolFactory()
server = TServer.TThreadedServer(processor, transport, tfactory, pfactory)
I am running two Thrift instances on localhost on different ports.
I am using Thrift with Python 2.7.
I tried my best to draft my problem quickly. If it's still not clear, let me know and I can elaborate.
Thanks in advance.
Panakj
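One pattern worth checking (a sketch under the assumption that the parallel calls share a single client connection, which the post doesn't confirm): Thrift's generated clients are not thread-safe, because each send_*/recv_* pair shares one transport and protocol, so two threads issuing calls over the same connection can pick up each other's replies exactly like this. Giving each thread its own connection avoids it (host, port, and the generated module path are placeholders):

import threading

from thrift.transport import TSocket, TTransport
from thrift.protocol import TBinaryProtocol
# SyncService is the module generated by the Thrift compiler;
# its import path here is an assumption.
from syncservice import SyncService

def make_client():
    # One socket/protocol/client per thread; never share these.
    transport = TTransport.TBufferedTransport(TSocket.TSocket('127.0.0.1', 9090))
    protocol = TBinaryProtocol.TBinaryProtocol(transport)
    transport.open()
    return SyncService.Client(protocol), transport

def worker():
    client, transport = make_client()
    try:
        client.load_account()           # replies can no longer cross methods
        client.getRequestQueueCoun()
    finally:
        transport.close()

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()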

Django + FastCGI - randomly raising OperationalError

I'm running a Django application. I had it under Apache + mod_python before, and it was all OK. I switched to Lighttpd + FastCGI. Now I randomly get the following exception (neither the place nor the time where it appears seems to be predictable). Since it's random, and it appears only after switching to FastCGI, I assume it has something to do with some setting.
I found a few results when googling, but they seem to be related to setting maxrequests=1. However, I use the default, which is 0.
Any ideas where to look?
PS. I'm using PostgreSQL. Might be related to that as well, since the exception appears when making a database query.
File "/usr/lib/python2.6/site-packages/django/core/handlers/base.py", line 86, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 140, in root
if not self.has_permission(request):
File "/usr/lib/python2.6/site-packages/django/contrib/admin/sites.py", line 99, in has_permission
return request.user.is_authenticated() and request.user.is_staff
File "/usr/lib/python2.6/site-packages/django/contrib/auth/middleware.py", line 5, in __get__
request._cached_user = get_user(request)
File "/usr/lib/python2.6/site-packages/django/contrib/auth/__init__.py", line 83, in get_user
user_id = request.session[SESSION_KEY]
File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 46, in __getitem__
return self._session[key]
File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/base.py", line 172, in _get_session
self._session_cache = self.load()
File "/usr/lib/python2.6/site-packages/django/contrib/sessions/backends/db.py", line 16, in load
expire_date__gt=datetime.datetime.now()
File "/usr/lib/python2.6/site-packages/django/db/models/manager.py", line 93, in get
return self.get_query_set().get(*args, **kwargs)
File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 304, in get
num = len(clone)
File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 160, in __len__
self._result_cache = list(self.iterator())
File "/usr/lib/python2.6/site-packages/django/db/models/query.py", line 275, in iterator
for row in self.query.results_iter():
File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 206, in results_iter
for rows in self.execute_sql(MULTI):
File "/usr/lib/python2.6/site-packages/django/db/models/sql/query.py", line 1734, in execute_sql
cursor.execute(sql, params)
OperationalError: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Possible solution: http://groups.google.com/group/django-users/browse_thread/thread/2c7421cdb9b99e48
Until recently I was curious to test this on Django 1.1.1. Would this exception be thrown again... surprise, there it was again. It took me some time to debug this; a helpful hint was that it only shows when (pre)forking.
So for those getting these exceptions randomly, I can say... fix your code :) OK, seriously, there are always a few ways of doing this, so first let me explain where the problem is. If you access the database when any of your modules is imported, e.g. reading configuration from the database, then you will get this error. When your fastcgi-prefork application starts, it first imports all modules, and only after this forks children. If you have established a db connection during import, all child processes will have an exact copy of that object. This connection is closed at the end of the request phase (request_finished signal). So the first child called to process a request will close this connection. But what happens to the rest of the child processes? They believe they have an open and presumably working connection to the db, so any db operation will cause an exception.
Why does this not show up in the threaded execution model? I suppose because threads are using the same object and know when any other thread is closing the connection.
How to fix this? The best way is to fix your code... but this can be difficult sometimes. Another option, in my opinion quite clean, is to put a small piece of code somewhere in your application:

from django.db import connection
from django.core import signals

def close_connection(**kwargs):
    connection.close()

signals.request_started.connect(close_connection)
Not ideal though; connecting twice to the DB is a workaround at best.
Possible solution: using connection pooling (pgpool, pgbouncer), so you have DB connections pooled and stable, and handed quickly to your FCGI daemons.
The problem is that this triggers another bug: psycopg2 raising an InterfaceError because it's trying to disconnect twice (pgbouncer already handled this).
Now the culprit is the Django signal request_finished triggering connection.close() and failing loudly even if the connection was already closed. I don't think this behavior is desired: if the request has already finished, we don't care about the DB connection anymore. A patch to correct this should be simple.
The relevant traceback:
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/core/handlers/wsgi.py in __call__(self=<django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, environ={'AUTH_TYPE': 'Basic', 'DOCUMENT_ROOT': '/storage/test', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTPS': 'off', 'HTTP_ACCEPT': 'application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_AUTHORIZATION': 'Basic dGVzdGU6c3VjZXNzbw==', 'HTTP_CONNECTION': 'keep-alive', 'HTTP_COOKIE': '__utma=175602209.1371964931.1269354495.126938948...none); sessionid=a1990f0d8d32c78a285489586c510e8c', 'HTTP_HOST': 'www.rede-colibri.com', ...}, start_response=<function start_response at 0x24f87d0>)
  246             response = self.apply_response_fixes(request, response)
  247         finally:
  248             signals.request_finished.send(sender=self.__class__)
  249
  250         try:
global signals = <module 'django.core.signals' from '/usr/local/l.../Django-1.1.1-py2.6.egg/django/core/signals.pyc'>, signals.request_finished = <django.dispatch.dispatcher.Signal object at 0x1975710>, signals.request_finished.send = <bound method Signal.send of <django.dispatch.dispatcher.Signal object at 0x1975710>>, sender undefined, self = <django.core.handlers.wsgi.WSGIHandler object at 0x24fb210>, self.__class__ = <class 'django.core.handlers.wsgi.WSGIHandler'>
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/dispatch/dispatcher.py in send(self=<django.dispatch.dispatcher.Signal object at 0x1975710>, sender=<class 'django.core.handlers.wsgi.WSGIHandler'>, **named={})
  164
  165         for receiver in self._live_receivers(_make_id(sender)):
  166             response = receiver(signal=self, sender=sender, **named)
  167             responses.append((receiver, response))
  168         return responses
response undefined, receiver = <function close_connection at 0x197b050>, signal undefined, self = <django.dispatch.dispatcher.Signal object at 0x1975710>, sender = <class 'django.core.handlers.wsgi.WSGIHandler'>, named = {}
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/__init__.py in close_connection(**kwargs={'sender': <class 'django.core.handlers.wsgi.WSGIHandler'>, 'signal': <django.dispatch.dispatcher.Signal object at 0x1975710>})
  63  # when a Django request is finished.
  64  def close_connection(**kwargs):
  65      connection.close()
  66  signals.request_finished.connect(close_connection)
  67
global connection = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, connection.close = <bound method DatabaseWrapper.close of <django.d...ycopg2.base.DatabaseWrapper object at 0x17b14c8>>
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/__init__.py in close(self=<django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>)
  74      def close(self):
  75          if self.connection is not None:
  76              self.connection.close()
  77              self.connection = None
  78
self = <django.db.backends.postgresql_psycopg2.base.DatabaseWrapper object at 0x17b14c8>, self.connection = <connection object at 0x1f80870; dsn: 'dbname=co...st=127.0.0.1 port=6432 user=postgres', closed: 2>, self.connection.close = <built-in method close of psycopg2._psycopg.connection object at 0x1f80870>
Exception handling here could add more leniency:
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/__init__.py
63  # when a Django request is finished.
64  def close_connection(**kwargs):
65      connection.close()
66  signals.request_finished.connect(close_connection)
Or it could be handled better in psycopg2, so as not to throw fatal errors if all we're trying to do is disconnect and it already is:
/usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/__init__.py
74      def close(self):
75          if self.connection is not None:
76              self.connection.close()
77              self.connection = None
Other than that, I'm short on ideas.
In the switch, did you change PostgreSQL client/server versions?
I have seen similar problems with php+mysql, and the culprit was an incompatibility between the client/server versions (even though they had the same major version!)
Smells like a possible threading problem. Django is not guaranteed thread-safe although the in-file docs seem to indicate that Django/FCGI can be run that way. Try running with prefork and then beat the crap out of the server. If the problem goes away ...
Maybe the PYTHONPATH and PATH environment variables are different for the two setups (Apache + mod_python and Lighttpd + FastCGI).
In the end I switched back to Apache + mod_python (I was having other random errors with FastCGI besides this one), and everything is good and stable now.
The question still remains open. In case anybody has this problem in the future and solves it, they can record the solution here for future reference. :)
I fixed a similar issue when using a GeoDjango model that was not using the default ORM for one of its functions. When I added a line to manually close the connection, the error went away.
http://code.djangoproject.com/ticket/9437
I still see the error randomly (~50% of requests) when doing stuff with user login/sessions, however.
I went through the same problem recently (lighttpd, FastCGI & Postgres). I searched for a solution for days without success and, as a last resort, switched to MySQL. The problem is gone.
Why not store sessions in the cache? Set

SESSION_ENGINE = "django.contrib.sessions.backends.cache"

Also, you can try using Postgres with pgbouncer (Postgres is a prefork server and doesn't like many connects/disconnects at a time), but first check your postgresql.log.
Another possibility: you have many records in the session tables, and django-admin.py cleanup can help.
The problem could mainly be with imports. At least that's what happened to me. I wrote my own solution after finding nothing on the web. Please check my blog post here: Simple Python Utility to check all Imports in your project
Of course this will only help you get to the root of the original issue quickly; it is not an actual solution to your problem by itself.
Change from method=prefork to method=threaded solved the problem for me.
I'll try to give an answer to this even though I'm not using Django but Pyramid as the framework. I had been running into this problem for a long time. The problem was that it was really difficult to produce this error in tests... Anyway. I finally solved it by digging through the whole business of sessions, scoped sessions, session instances, engines, connections, etc., and found this:
http://docs.sqlalchemy.org/en/rel_0_7/core/pooling.html#disconnect-handling-pessimistic
This approach simply adds a listener to the connection pool of the engine. In the listener, a static select is issued against the database. If it fails, the pool tries to establish a new connection to the database before failing entirely. Important: this happens before any other statements are sent to the database, so it is possible to pre-check the connection, which prevents the rest of your code from failing.
This is not a clean solution, since it doesn't solve the error itself, but it works like a charm. Hope this helps someone.
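A minimal sketch of that recipe, following the linked SQLAlchemy docs (the connection URL is a placeholder):

from sqlalchemy import create_engine, event, exc

engine = create_engine("postgresql://user:password@127.0.0.1/mydb")  # placeholder URL

@event.listens_for(engine, "checkout")
def ping_connection(dbapi_connection, connection_record, connection_proxy):
    # Run a cheap query before the pool hands the connection to the
    # application; on failure, DisconnectionError makes the pool retry
    # with a fresh connection instead of surfacing the dead one.
    cursor = dbapi_connection.cursor()
    try:
        cursor.execute("SELECT 1")
    except Exception:
        raise exc.DisconnectionError()
    cursor.close()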
An applicable quote:
"2019 anyone?"
- half of YouTube comments, circa 2019
If anyone is still dealing with this, make sure your app "eagerly forks" such that your Python DB driver (psycopg2 for me) isn't sharing resources between processes.
I solved this issue on uWSGI by adding the lazy-apps = true option, which causes it to load the app in each worker process right out of the gate, rather than waiting for copy-on-write after forking. I imagine other WSGI / FastCGI hosts have similar options.
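For reference, a sketch of the relevant uWSGI configuration (module name and worker count are placeholders, not from the original setup):

[uwsgi]
# placeholder entry point
module = myproject.wsgi:application
processes = 4
# load the application in each worker after fork(), so workers
# don't share one inherited DB connection
lazy-apps = true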
Have you considered downgrading to Python 2.5.x (2.5.4 specifically)? I don't think Django would be considered mature on Python 2.6, since there are some backwards-incompatible changes. However, I doubt this will fix your problem.
Also, Django 1.0.2 fixed some nefarious little bugs, so make sure you're running that. This very well could fix your problem.
