Can you hook pyramid into twisted, skipping the wsgi part? - python

Given that:
WSGI doesn't play very well with async.
Twisted ergonomics suck.
Pyramid is very clean and component oriented.
How could I use Pyramid and Twisted?
I can imagine making a Twisted protocol to get the raw HTTP request. But then I can't see how to parse it into a Pyramid request object. All documented Pyramid tools seem to expect some WSGI interface at some point.
I could use Waitress's code to parse the request and turn it into a WSGI environ, then pass the environ to Pyramid, but that's a lot of work, with many issues down the road that I'm sure I can't even imagine.
I know Twisted includes a WSGI server, but it implies synchronicity in the app code, which does not serve my purpose. I want to be able to use the request and response objects, renderers, routers, and other Pyramid tools in an asynchronous Twisted protocol, with asynchronous, non-blocking app code as well. Hence I don't want to use WSGI.
The Twisted API is verbose, heavy, and unintuitive compared to any other asynchronous toolkit you'll find in Python or even other languages, hence the criticism about its ergonomics. I can use it, but training newcomers on my team to do so has a high cost, and I wish to lower it.
Indeed, it packs a lot of power that I want to use.
To elaborate on my needs: I'm building a tool using crossbar.io and cyclone to have a WAMP/HTTP framework that is a bit friendlier to my team than the current tools. But cyclone is not as complete as Pyramid, and I was hoping Pyramid's components were decoupled enough that the WSGI paradigm was not enforced, so I could leverage the tremendous work they did on it. All I need is an entry point: somewhere to get the HTTP request text and parse it into a request object, and somewhere to take a response object and return HTTP text to the client. I'd rather not write a protocol manually for this; HTTP is tricky and I'm sure I'd get it wrong in many ways.
To be precise: I don't wish to use the full Pyramid framework, just some components here and there, such as routing, cookie parsing, CSRF protection, etc. I won't use its view system, since it assumes a synchronous API.
Looking at Pyramid, I can see that it expects the entire request to be parsed and turned into a request object. It also returns the response as an object. So part of the problem of hooking Twisted and Pyramid together is to:
get the HTTP request text as one big chunk from Twisted;
parse it into a request object somehow (I couldn't find a simple function to do this, but if I can turn it into a WSGI environ, Pyramid can convert that to its own request object);
get the Pyramid response object and turn it into an iterator of strings (an adapter can be found, since that's what WSGI does);
send the response back with Twisted from this iterator of strings (roughly as in the sketch below).
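Here's an untested sketch of the conversion I imagine, assuming Pyramid's WebOb-based request and response objects can be used directly for this:

from pyramid.request import Request

def environ_to_request(environ):
    # Pyramid's Request is a WebOb request, so it can be built
    # straight from a WSGI environ dictionary.
    return Request(environ)

def response_to_chunks(response):
    # A Pyramid/WebOb Response exposes app_iter, an iterable of
    # byte strings that Twisted could write out one by one.
    return iter(response.app_iter)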
An alternative could be to use something simpler than Pyramid, like Werkzeug, for the glue.

Twisted Web lets you interpret HTTP request bodies incrementally as they're received, regardless of content type - but it doesn't make doing so very easy. There's a very old ticket that we never seem to make much progress on for improving this situation. Until it's resolved, there probably isn't a better answer than the one I'm about to give. This incremental HTTP request body delivery is, I think, what you're looking for here (since you said you expect to get the request text as one big chunk).
The hook for incremental request body handling is Request.handleContentChunk. You can see a complete demonstration of its use in my answer to Python server for streaming request body content.
This gives you the data as it arrives at the server. If you want to use Pyramid, you'll have to construct a Pyramid request that uses this data. Most of the initialization of the Pyramid request object should be straightforward (e.g. filling the environ dictionary with the request headers - you can take these from Request.requestHeaders). The slightly trickier part will be initializing the Pyramid request object's body - which is supposed to be a file-like object that provides synchronous access to the request body.
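For the environ part, something along these lines might do (a rough, untested sketch; twisted_request stands for whatever Request instance Twisted hands you, and only a few keys are shown):

def headers_to_environ(twisted_request):
    # Build a minimal WSGI-style environ from a twisted.web Request.
    environ = {
        'REQUEST_METHOD': twisted_request.method.decode('ascii'),
        'PATH_INFO': twisted_request.path.decode('ascii'),
        'SERVER_PROTOCOL': twisted_request.clientproto.decode('ascii'),
    }
    for name, values in twisted_request.requestHeaders.getAllRawHeaders():
        key = 'HTTP_' + name.decode('ascii').upper().replace('-', '_')
        environ[key] = ', '.join(v.decode('ascii') for v in values)
    return environ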
On the one hand, if you dispatch the request before the request body has been completely received then you avoid the cost of buffering the entire request body in memory. On the other hand, if you let application code begin to read the request body then you have to deal with the case where it tries to read beyond the point in the data which has actually arrived at the server. This can be dealt with. The body file-like object is expected to present a blocking interface, so all you have to do is block until the data is available - which also means the code doing the reading must run somewhere other than the reactor thread (for example via twisted.internet.threads.deferToThread), so the reactor stays free to keep delivering body chunks.
Here's a brief (incomplete, not meant to actually work) sketch of what I mean:
# Note: the blocking get() below must not run in the reactor thread
# (dispatch the Pyramid code via deferToThread or similar), or the
# read will block the reactor and the data will never arrive.
from Queue import Queue

class Body(object):
    def __init__(self):
        self._buffer = Queue()
        self._pending = b""
        self._eof = False

    def read(self, how_many):
        if self._eof:
            return b""
        if self._pending == b"":
            data = self._buffer.get()
            if data is None:
                self._eof = True
                return b""
            self._pending = data
        result = self._pending[:how_many]
        self._pending = self._pending[how_many:]
        return result

    def _add_data(self, data):
        # Called with each body chunk; pass None to signal end of body.
        self._buffer.put(data)
You can create an instance of this type, initialize the Pyramid request object's body attribute with it, and then call _add_data on it in the Twisted Request class's handleContentChunk callback.
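A minimal, untested sketch of that wiring might look like this (StreamingRequest and root_resource are placeholder names, Body is the class sketched above, and the Pyramid dispatch itself would have to run in a worker thread, e.g. via deferToThread, so the blocking reads don't stall the reactor):

from twisted.web.server import Site, Request

class StreamingRequest(Request):
    def gotLength(self, length):
        # Called once the request headers have been parsed.
        Request.gotLength(self, length)
        self.pyramid_body = Body()

    def handleContentChunk(self, data):
        # Called by Twisted for each chunk of the request body.
        self.pyramid_body._add_data(data)

    def requestReceived(self, command, path, version):
        # The whole body has arrived; unblock any pending read.
        self.pyramid_body._add_data(None)
        Request.requestReceived(self, command, path, version)

site = Site(root_resource)  # root_resource: whatever Resource tree you serve
site.requestFactory = StreamingRequest

To dispatch before the body is complete, you would kick off the Pyramid call earlier (e.g. from gotLength) rather than waiting for the request to be processed normally.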
You could also implement this as an enhancement to Twisted's own WSGI server. For the sake of simplicity, Twisted's WSGI server does read the entire request body before dispatching the request to the WSGI application - but it doesn't have to. If this is the only problem with WSGI then it'd be better to improve the quality of the WSGI implementation and keep the interface rather than both implementing the improvement and stepping outside of the interface (tying you more closely to both Twisted and Pyramid - unnecessarily).
The second half of the problem, generating response bodies incrementally, shouldn't really be a problem. Twisted's WSGI container will write out response data as the WSGI application object yields it. Or if you use twisted.web.resource instead of the WSGI interface, you can call request.write as many times as you like, at any time you like (up until you call request.finish). The only trick is that if you want to do this you must return NOT_DONE_YET from the render method.
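For the twisted.web.resource route, the pattern looks roughly like this (a generic sketch, not specific to Pyramid):

from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET

class Streaming(Resource):
    isLeaf = True

    def render_GET(self, request):
        request.write(b"first chunk\n")
        # Write the rest later, from any callback, then finish.
        reactor.callLater(1, self._finish, request)
        return NOT_DONE_YET

    def _finish(self, request):
        request.write(b"second chunk\n")
        request.finish()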

Related

Python oauth2client async

I am fighting with tornado and the official python oauth2client, gcloud... modules.
These modules accept an alternate HTTP client passed with http=, as long as it has a request method that these libraries can call whenever an HTTP request must be sent to Google and/or the access tokens must be renewed using the refresh tokens.
I have created a simple class which has a self.client = AsyncHTTPClient(); its request method then returns self.client.fetch(...).
My goal is to be able to yield any of these libraries' calls, so that Tornado will execute them asynchronously.
The thing is that they are highly dependent on what the default client - httplib2.Http() - returns: a (response, content) tuple.
I am really stuck and cannot find a clean way of making this async.
If anyone has already found a way, please help.
Thank you in advance
These libraries do not support asynchronous operation, and porting them is not always easy.
oauth2client
Depending on what you want to do maybe Tornado's GoogleOAuth2Mixin or tornado-alf will be enough.
gcloud
I am not aware of any Tornado/asyncio implementation of gcloud-python, so you could:
write it yourself. It's not a simple transport change of Connection.http or request; all the surrounding code must be able to use/yield futures/coroutines.
wrap it in a ThreadPoolExecutor (as @Apero mentioned). This is a high-level API, so any nested API calls within that yield will be executed in the same thread (not using the pool). It could work well; a sketch follows below.
external app (with ProcessPoolExecutor or Popen).
When I had a similar problem with AWS a couple of years ago, I ended up executing the CLI asynchronously (Tornado + subprocess.Popen + some CLI tool, awscli or boto based) and handling simple cases (like S3 or basic EC2 operations) with a plain AsyncHTTPClient.
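For the ThreadPoolExecutor option, a minimal sketch could look like this (blocking_gcloud_call is a placeholder for whatever synchronous gcloud/oauth2client call you need to make):

from concurrent.futures import ThreadPoolExecutor
from tornado import gen

# A small pool reserved for blocking Google API calls.
executor = ThreadPoolExecutor(max_workers=4)

def blocking_gcloud_call(client):
    # Placeholder for the real synchronous API call.
    return client.list_buckets()

@gen.coroutine
def list_buckets_async(client):
    # The blocking call runs in a pool thread while the IOLoop keeps
    # serving other requests; Tornado can yield the
    # concurrent.futures.Future returned by submit().
    result = yield executor.submit(blocking_gcloud_call, client)
    raise gen.Return(result)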

Library needs to store object in a Web session: suggestions on API design

I'm developing an OpenID consumer library [1] that does its job in two separate Web requests:
When the user requests authentication, the library discovers some information from a URL provided by the user.
This information is used during another Web request when the library actually finishes authentication.
I want advice on how best to design the library API for persisting the "discovery information" between the two requests:
I could ask the caller to provide their own session object for both calls and the library would store and read its own object from it:
result = openid.request(session, url) # stores discovery in `session`
# ...
openid.authenticate(session, auth_data) # reads discovery from `session`
I don't really like it aesthetically as it looks like an Inversion of Control: bring your own buffer, we'll use it for you. Also, a "session" is not a well-defined interface and may expose some limitations. For example, sessions in Django by default only want to store JSON-serializable objects and the object representing the discovery information is of a custom class that json doesn't recognize.
I could return the discovery info to the caller and let them deal with it as they see fit:
result, discovery = openid.request(url)
session['openid.discovery'] = discovery
# ...
openid.authenticate(session['openid.discovery'], auth_data)
This makes the library cleaner (no need to invent session key names or session cleanup policies) but moves this job to the caller. Which is probably okay if the caller is another library code implementing OpenID behaviour for a Web framework, but not okay if the caller is an application developer who shouldn't be bothered with this sort of book-keeping.
Which approach is best (for any subjective definition of "best")? Is there another one?
The second approach is definitely cleaner, because:
normal return values are always better than "out" arguments;
it doesn't restrict the format of your session object (in fact, there may be no explicit session object at all).

Flask accept request data as a stream without processing?

I have an endpoint in my Flask application that accepts large data as the request content. I would like to ensure that Flask never attempts to process this body, regardless of its content type, and that I can always read it with the request.stream interface.
This applies only to a couple of endpoints, not my entire application.
How can I configure this?
The Werkzeug Request object relies heavily on properties, and anything that touches request data is lazily cached; e.g. only when you actually access the .form attribute does any parsing take place, with the result cached.
In other words, don't touch .files, .form, .get_data(), etc. and nothing will be sucked into memory either.
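A small sketch of such an endpoint (the route name and chunk size are arbitrary):

from flask import Flask, request

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    # Read the raw body incrementally; avoid request.form, request.files
    # and request.get_data(), which would parse or buffer the whole body.
    total = 0
    while True:
        chunk = request.stream.read(64 * 1024)
        if not chunk:
            break
        total += len(chunk)
    return "received %d bytes\n" % total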

Send a GET request with a body

I'm using elasticsearch, and its RESTful API supports reading bodies in GET requests for search criteria.
I'm currently doing
response = urllib.request.urlopen(url, data).read().decode("utf-8")
If data is present, it issues a POST, otherwise a GET. How can I force a GET despite the fact that I'm including data (which should go in the request body, as with a POST)?
NB: I'm aware I can use a source property in the URL, but the queries we're running are complex, and the query definition is quite verbose, resulting in extremely long URLs (long enough that they can interfere with some older browsers and proxies).
I'm not aware of a nice way to do this using urllib. However, requests makes it trivial (and, in fact, trivial with any arbitrary verb and request content) by using the requests.request* function:
requests.request(method='get', url='localhost/test', data='some data')
Constructing a small test web server will show that the data is indeed sent in the body of the request, and that the method perceived by the server is indeed a GET.
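For example, a throwaway server like this (standard library only, arbitrary port) will print the method and body it receives:

from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read however many bytes the client declared in Content-Length.
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        print("method=%s body=%r" % (self.command, body))
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    HTTPServer(('localhost', 8000), EchoHandler).serve_forever()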
*note that I linked to the requests.api.request code because that's where the actual function definition lives. You should call it using requests.request(...)

Web application: Hold large object between requests

I'm working on a web application related to genome searching. This application makes use of this suffix tree library through Cython bindings. Objects of this type are large (hundreds of MB up to ~10GB) and take as long to load from disk as it takes to process them in response to a page request. I'm looking for a way to load several of these objects once on server boot and then use them for all page requests.
I have tried using a remote manager / client setup using the multiprocessing module, modeled after this demo, but it fails when the client connects with an error message that says the object is not picklable.
I would suggest writing a small Flask application (or even raw WSGI… but it's probably simpler to use Flask, as it will be easier to get up and running quickly) which loads the genome database and then exposes a simple API. Something like this:
from flask import Flask

app = Flask(__name__)
database = load_database()  # load the large object once, at boot

@app.route('/get_genomes')
def get_genomes():
    return database.all_genomes()

app.run(debug=True)
Or, you know, something a bit more sensible.
Also, if you need to handle more than one request at a time (I believe app.run will only handle one at a time), start with threading (see below)… And if that's too slow, you can os.fork() after the database is loaded and run multiple request handlers from there (that way they will all share the same database in memory).
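The simplest threaded variant is just (using the built-in development server, which may or may not be acceptable for your deployment):

# One thread per request; all threads share the single in-memory
# `database` object loaded above.
app.run(threaded=True)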
