Django with mod_wsgi and gzip - python

I am using Django as a REST server. I am supposed to receive a POST containing JSON that I should parse. The client is a Salesforce server that gzips the request.
To get the request inflated, I use this in VHost:
SetInputFilter DEFLATE
Almost everything looks fine, but when I read request.body or request.read(16000) - the input is pretty small - I always see truncated input (the last 5 characters are missing).
Any suggestions where to start debugging?

Technically the WSGI specification doesn't support the concept of mutating input filters, whether as middleware or within the underlying web server.
The specific issue is that mutating input filters will change the amount of request content, but will not change the CONTENT_LENGTH value in the WSGI environ dictionary.
The WSGI specification says that a valid WSGI application is only allowed to read up to CONTENT_LENGTH bytes from the request content. As a consequence, in the case of compressed request content, where the final request size will end up being greater than what CONTENT_LENGTH specifies, a web framework is likely to truncate the request input before all data is read.
You can find some details about this issue in:
http://blog.dscpl.com.au/2009/10/details-on-wsgi-10-amendmentsclarificat.html
Although changes to the specification were pushed for, nothing ever happened.
To work around the problem, you would need to implement a WSGI middleware that wraps the Django application. If it detects, by way of the headers passed, that the original content had been compressed but that Apache has already decompressed it, it would read all request content until it reaches the end-of-stream marker, ignoring CONTENT_LENGTH, before even passing the request to Django. Having done that, it could then change CONTENT_LENGTH and substitute wsgi.input with a replacement stream which returns the already-read content.
Because the content could be quite large and of unknown size, reading it all into memory would not necessarily be a good idea. You would therefore likely want to read it a block at a time and write it out to a temporary file. wsgi.input would then be replaced with an open file handle on the temporary file, and CONTENT_LENGTH replaced with the final size of the file.
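A minimal sketch of such a middleware, assuming the client's Content-Encoding header is still visible after Apache has inflated the body (the class name and detection logic are illustrative, not a published recipe):

import tempfile

class DecompressedInputMiddleware(object):
    def __init__(self, application):
        self.application = application

    def __call__(self, environ, start_response):
        # Assumed detection: the original Content-Encoding header survives
        # even though Apache's DEFLATE input filter already inflated the body.
        if environ.get('HTTP_CONTENT_ENCODING') == 'gzip':
            spool = tempfile.TemporaryFile()
            stream = environ['wsgi.input']
            while True:
                # Ignore CONTENT_LENGTH and drain to the end-of-stream marker.
                block = stream.read(8192)
                if not block:
                    break
                spool.write(block)
            environ['CONTENT_LENGTH'] = str(spool.tell())
            spool.seek(0)
            environ['wsgi.input'] = spool
            del environ['HTTP_CONTENT_ENCODING']
        return self.application(environ, start_response)

In wsgi.py you would then wrap the Django application, e.g. application = DecompressedInputMiddleware(get_wsgi_application()).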
If you search the mod_wsgi archives on Google Groups, you should find prior discussions of this and perhaps even some example code.

Related

python azure blob storage md5 check fails on blob upload using put_block_blob_from_path

I am trying to upload a blob to Azure blob storage with the Python SDK. I want to pass the MD5 hash for validation on the server side after upload.
Here's the code:
blob_service.put_block_blob_from_path(
    container_name='container_name',
    blob_name='upload_dir/' + object_name,
    file_path=object_name,
    content_md5=object_md5Hash
)
But I get this error:
AzureHttpError: The MD5 value specified in the request did not match with the MD5 value calculated by the server.
The file is ~200 MB and the error is thrown instantly; the file is never uploaded. So I suspect that it may be comparing the supplied hash with the hash of the first chunk or something.
Any ideas?
This is sort of an SDK bug, in that we should throw a better error message rather than hitting the service, but validating the content of a large upload that has to be chunked simply doesn't work. x_ms_blob_content_md5 will store the MD5, but the service will not validate it; that is something you could do on download, though. content_md5 is validated by the server for the body of a particular request, but since there is more than one request with chunked blobs, it will never work.
So, if the blob is small enough (below BLOB_MAX_DATA_SIZE) to be put in a single request, content_md5 will work fine. Otherwise I'd simply recommend using HTTPS, and storing the MD5 in x_ms_blob_content_md5 if you think you might want to download with HTTP and validate it on download. HTTPS already provides validation for things like bit flips on the wire, so using it for upload/download will do a lot. If you can't upload/download with HTTPS for one reason or another, you can consider chunking the blob yourself using the put block and put block list APIs.
FYI: In future versions we do intend to add automatic MD5 calculation for both single put and chunked operations in the library itself which will fully solve this. For the next version, we will add an improved error message if content_md5 is specified for a chunked download.
I reviewed the source code of the put_block_blob_from_path function of the Azure Blob Storage SDK. The case is explained in the function's docstring; please see the content below and refer to https://github.com/Azure/azure-storage-python/blob/master/azure/storage/blob/blobservice.py.
content_md5:
    Optional. An MD5 hash of the blob content. This hash is used to verify the integrity of the blob during transport. When this header is specified, the storage service checks the hash that has arrived with the one that was sent. If the two hashes do not match, the operation will fail with error code 400 (Bad Request).
I think there are two things going on here.
Bug in SDK - I believe you have discovered a bug in the SDK. I looked at the source code for this function on GitHub, and what I found is that when a large blob is uploaded in chunks, the SDK first tries to create an empty block blob. With block blobs, this is not required. When it creates the empty block blob, it does not send any data; but you're setting content_md5, so the SDK compares the content_md5 you sent with the MD5 of the empty content, and because they don't match, you get an error.
To fix the issue in the interim, please modify the source code in blobservice.py and comment out the following lines of code:
self.put_blob(
    container_name,
    blob_name,
    None,
    'BlockBlob',
    content_encoding,
    content_language,
    content_md5,
    cache_control,
    x_ms_blob_content_type,
    x_ms_blob_content_encoding,
    x_ms_blob_content_language,
    x_ms_blob_content_md5,
    x_ms_blob_cache_control,
    x_ms_meta_name_values,
    x_ms_lease_id,
)
I have created a new issue on Github for this: https://github.com/Azure/azure-storage-python/issues/99.
Incorrect usage - I noticed that you're passing the MD5 hash of the file in the content_md5 parameter. This will not work for you. You should instead pass the MD5 hash in the x_ms_blob_content_md5 parameter. So your call should be:
blob_service.put_block_blob_from_path(
    container_name='container_name',
    blob_name='upload_dir/' + object_name,
    file_path=object_name,
    x_ms_blob_content_md5=object_md5Hash
)

Can you hook pyramid into twisted, skipping the wsgi part?

Given that:
WSGI doesn't play very well with async.
Twisted ergonomics suck.
Pyramid is very clean and component oriented.
How could I use Pyramid and Twisted together?
I can imagine making a Twisted protocol to get the raw HTML request. But then I can't see how to parse it into a Pyramid request object. All documented Pyramid tools seem to expect some WSGI interface at some point.
I could use Waitress code to parse the request and turn it into a WSGI environ, then pass the environ to Pyramid, but that's a lot of work, with many issues down the road that I'm sure I can't even imagine.
I know Twisted includes a WSGI server, but it implies synchronicity in the app code, which does not serve my purpose. I want to be able to use the request and response objects, renderers, routers, and other Pyramid tools in an asynchronous Twisted protocol, with asynchronous, non-blocking app code as well. Hence I don't want to use WSGI.
The Twisted API is verbose, heavy and unintuitive compared to any other asynchronous toolkit you'll find in Python or even other languages. Hence the criticism about its ergonomics. I can use it, but training newcomers in my team to do so has a high cost. I wish to lower it.
Indeed, it packs a lot of power that I want to use.
To elaborate on my needs, I'm building a tool using crossbar.io and cyclone to have a WAMP/HTTP framework a bit friendlier to my team than the current tools. But cyclone is not as complete as Pyramid, and I was hoping Pyramid's components were decoupled enough that the WSGI paradigm was not enforced, so I could leverage the tremendous work they did on it. All I need is an entry point: somewhere to get the HTML and parse it into a request object, and somewhere to take a response object and return HTML to the client. I wish I didn't have to write a protocol manually for this; HTTP is tricky and I'm sure I'd get it wrong in many ways.
To be precise: I don't wish to use the full Pyramid framework, just some components here and there, such as routing, cookie parsing, CSRF protection, etc. I won't use their view system, as it assumes a synchronous API.
Looking at Pyramid, I can see that it expects the entire request to be parsed and turned into a request object. It also returns the response as an object. So part of the problem, in hooking Twisted and Pyramid together, is to:
get the HTML request text as one big chunk from Twisted;
parse it into the request object somehow (I couldn't find a simple function to do this, but if I can turn it into a WSGI environ + request object, Pyramid can convert it to its format);
get the Pyramid response object and turn it into a generator of strings (an adaptor can be found, since that's what WSGI does);
send the response back with Twisted from this generator of strings.
An alternative could be to use something simpler than Pyramid, like Werkzeug, for the glue.
Twisted Web lets you interpret HTTP request bodies (regardless of content-type, HTML or otherwise) incrementally as they're received - but it doesn't make doing so very easy. There's a very old ticket that we never seem to make much progress on for improving this situation. Until it's resolved, there probably isn't a better answer than the one I'm about to give. This incremental HTTP request body delivery, I think, is what you're looking for here (because you said you expect requests to "be a big HTML chunk").
The hook for incremental request body handling is Request.handleContentChunk. You can see a complete demonstration of its use in my answer to Python server for streaming request body content.
This gives you the data as it arrives at the server. If you want to use Pyramid, you'll have to construct a Pyramid request that uses this data. Most of the initialization of the Pyramid request object should be straightforward (e.g. filling the environ dictionary with the request headers - you can take these from Request.requestHeaders). The slightly trickier part will be initializing the Pyramid request object's body - which is supposed to be a file-like object that provides synchronous access to the request body.
On the one hand, if you dispatch the request before the request body has been completely received then you avoid the cost of buffering the entire request body in memory. On the other hand, if you let application code begin to read the request body then you have to deal with the circumstance that it tries to read beyond the point in the data which has actually arrived at the server. This can be dealt with. The body file-like object is expected to present a blocking interface. All you have to do is block until the data is available.
Here's a brief (incomplete, not meant to actually work) sketch of what I mean:
# Note: Queue.Queue is thread-safe, but read() blocks on it, so it must
# never be called from the reactor thread.
from Queue import Queue

class Body(object):
    def __init__(self):
        self._buffer = Queue()
        self._pending = b""
        self._eof = False

    def read(self, how_many):
        if self._eof:
            return b""
        if self._pending == b"":
            data = self._buffer.get()
            if data is None:
                # None is the sentinel marking end-of-body.
                self._eof = True
                return b""
            else:
                self._pending = data
        result = self._pending[:how_many]
        self._pending = self._pending[how_many:]
        return result

    def _add_data(self, data):
        self._buffer.put(data)
You can create an instance of this type, initialize the Pyramid request object's body attribute with it, and then call _add_data on it from the Twisted Request class's handleContentChunk method.
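A rough sketch of that wiring, assuming the Body class above (StreamingRequest and StreamingSite are illustrative names; dispatching into Pyramid before the body completes would need extra plumbing not shown here):

from twisted.web.server import Site, Request

class StreamingRequest(Request):
    def __init__(self, *args, **kwargs):
        Request.__init__(self, *args, **kwargs)
        self.body_stream = Body()

    def handleContentChunk(self, data):
        # Called for each piece of the request body as it arrives; we skip
        # the base implementation, which buffers everything into self.content.
        self.body_stream._add_data(data)

    def requestReceived(self, command, path, version):
        # Push the None sentinel so Body.read() sees end-of-stream, then
        # let normal dispatch proceed.
        self.body_stream._add_data(None)
        Request.requestReceived(self, command, path, version)

class StreamingSite(Site):
    requestFactory = StreamingRequest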
You could also implement this as an enhancement to Twisted's own WSGI server. For the sake of simplicity, Twisted's WSGI server does read the entire request body before dispatching the request to the WSGI application - but it doesn't have to. If this is the only problem with WSGI then it'd be better to improve the quality of the WSGI implementation and keep the interface rather than both implementing the improvement and stepping outside of the interface (tying you more closely to both Twisted and Pyramid - unnecessarily).
The second half of the problem, generating response bodies incrementally, shouldn't really be a problem. Twisted's WSGI container will write out response data as the WSGI application object yields it. Or if you use twisted.web.resource instead of the WSGI interface, you can call request.write as many times as you like, at any time you like (up until you call request.finish). The only trick is that if you want to do this you must return NOT_DONE_YET from the render method.
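For illustration, here's a small twisted.web.resource sketch of that incremental-write pattern (the resource and timings are arbitrary):

from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET

class Streaming(Resource):
    isLeaf = True

    def render_GET(self, request):
        # Write the body in pieces; finish() completes the response later.
        request.write(b"first chunk\n")
        reactor.callLater(1, request.write, b"second chunk\n")
        reactor.callLater(2, request.finish)
        return NOT_DONE_YET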

Flask accept request data as a stream without processing?

I have an endpoint in my Flask application that accepts large data as the content. I would like to ensure that Flask never attempts to process this body, regardless of its content type, and always lets me read it with the Request.stream interface.
This applies only to a couple of endpoints, not my entire application.
How can I configure this?
The Werkzeug Request object relies heavily on properties, and anything that touches request data is lazily cached; e.g. only when you actually access the .form attribute does any parsing take place, with the result cached.
In other words, don't touch .files, .form, .get_data(), etc. and nothing will be sucked into memory either.
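For example, a sketch of an endpoint that drains the raw body without triggering any parsing (the route and chunk size are arbitrary):

from flask import Flask, request

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    # Read the raw WSGI stream directly; touching request.form,
    # request.files or request.get_data() would parse and cache the body.
    total = 0
    while True:
        chunk = request.stream.read(64 * 1024)
        if not chunk:
            break
        total += len(chunk)  # e.g. write to a file instead of counting
    return "received %d bytes\n" % total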

Check size of HTTP POST without saving to disk

Is there a way to check the size of an incoming POST in Pyramid without saving the file to disk and using the os module?
You should be able to check request.content_length. WSGI does not support streaming the request body, so the content length must be specified. If you ever access request.body, request.params or request.POST, it will read the content and save it to disk.
The best way to handle this, however, is as close to the client as possible. Meaning if you are running behind a proxy of any sort, have that proxy reject requests that are too large. Once it gets to Python, something else may have already stored the request to disk.
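If you do want a second check inside Pyramid before anything touches the body, here's a sketch of a tween that only inspects the header (the name and limit are arbitrary):

from pyramid.httpexceptions import HTTPRequestEntityTooLarge

MAX_BODY = 10 * 1024 * 1024  # hypothetical 10 MB limit

def size_limit_tween_factory(handler, registry):
    def size_limit_tween(request):
        # content_length comes straight from the CONTENT_LENGTH header,
        # so nothing is read from the body or written to disk here.
        if request.content_length and request.content_length > MAX_BODY:
            return HTTPRequestEntityTooLarge()
        return handler(request)
    return size_limit_tween

# registered with config.add_tween('myapp.size_limit_tween_factory')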

image download problem (python)

I was trying to download images with multiple threads, with a limited max_count, in Python.
Each time a download thread is started, I leave it alone and activate another one. I want each download to finish within 5 s; the download is considered failed if opening the URL takes more than 5 s.
But how can I detect that and stop the failed thread?
Can you tell which version of Python you are using? Maybe you could have posted a snippet too.
Since Python 2.6, urllib2.urlopen has accepted a timeout argument.
Hope this will help you. The following is from the Python docs.
urllib2.urlopen(url[, data][, timeout])
Open the URL url, which can be either a string or a Request object.
Warning: HTTPS requests do not do any verification of the server's certificate.
data may be a string specifying additional data to send to the server, or None if no such data is needed. Currently HTTP requests are the only ones that use data; the HTTP request will be a POST instead of a GET when the data parameter is provided. data should be a buffer in the standard application/x-www-form-urlencoded format. The urllib.urlencode() function takes a mapping or sequence of 2-tuples and returns a string in this format. The urllib2 module sends HTTP/1.1 requests with the Connection: close header included.
The optional timeout parameter specifies a timeout in seconds for blocking operations like the connection attempt (if not specified, the global default timeout setting will be used). This actually only works for HTTP, HTTPS and FTP connections.
This function returns a file-like object with two additional methods:
geturl() — return the URL of the resource retrieved, commonly used to determine if a redirect was followed
info() — return the meta-information of the page, such as headers, in the form of a mimetools.Message instance (see Quick Reference to HTTP Headers)
Raises URLError on errors.
Note that None may be returned if no handler handles the request (though the default installed global OpenerDirector uses UnknownHandler to ensure this never happens).
In addition, the default installed ProxyHandler makes sure the requests are handled through the proxy when they are set.
Changed in version 2.6: timeout was added.
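Putting that together, a minimal sketch of what each worker thread could run (download_image is a hypothetical name; the timeout makes urlopen raise instead of hanging):

import socket
import urllib2

def download_image(url, path, timeout=5):
    # urlopen raises urllib2.URLError (or socket.timeout on later reads)
    # if an operation takes longer than `timeout` seconds, so the worker
    # thread ends instead of blocking forever.
    try:
        response = urllib2.urlopen(url, timeout=timeout)
        data = response.read()
    except (urllib2.URLError, socket.timeout):
        return False
    with open(path, 'wb') as f:
        f.write(data)
    return True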
