Processing Multipart/mixed on Twisted - python

I already have a Twisted REST interface that processes JSON requests, and now I need to process multipart/mixed requests in which a JSON document and a binary file arrive together.
After a bit of testing with curl, I've realized that the request variable gets filled with a 'file' type, and I'd like to know whether Twisted provides any method to handle this kind of request properly.
Thanks

Although it's an old post, I remember solving this with the email module, if anyone is interested...
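For anyone who lands here, a rough sketch of that approach: re-attach the Content-Type header (which carries the multipart boundary) to the raw body and hand the whole thing to the email parser inside a Twisted resource. This assumes Python 2, the classic twisted.web.resource API, and that the two parts arrive as application/json and application/octet-stream; those content types are assumptions, so adjust them to whatever your client actually sends.

from email import message_from_string
from twisted.web import resource

class UploadResource(resource.Resource):
    isLeaf = True

    def render_POST(self, request):
        # The email parser needs the Content-Type header (with its boundary)
        # in front of the body to recognise the multipart structure.
        content_type = request.getHeader("content-type")
        raw = "Content-Type: %s\r\n\r\n%s" % (content_type, request.content.read())
        msg = message_from_string(raw)

        parts = {}
        for part in msg.walk():
            if part.is_multipart():
                continue  # skip the multipart/mixed container itself
            parts[part.get_content_type()] = part.get_payload(decode=True)

        json_blob = parts.get("application/json")            # assumed content type
        binary_blob = parts.get("application/octet-stream")  # assumed content type
        # ... process json_blob / binary_blob here ...
        return "ok"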

Related

Zeep vs Requests for SOAP APIs in Python

So I know that Python's requests module can be used to handle REST APIs, and as per this answer here, requests can handle SOAP APIs as well.
I've worked only with REST APIs so far, but the few people I know who work with SOAP APIs almost always use another module, zeep.
Is there anything I'm missing?
Why is there a need for a whole separate module when it is possible with requests as well - and, more importantly, why doesn't anybody just use requests instead of zeep?
Ease of use and reusability. While you could use the requests module and write the raw XML yourself, Zeep is a high-level interface that generates the XML automatically.
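To make the difference concrete, here is a sketch of the same (made-up) operation called both ways; the WSDL URL, service address, and GetQuote operation are hypothetical placeholders, not a real API.

import requests
from zeep import Client

WSDL = "http://example.com/service?wsdl"  # hypothetical endpoint

# With zeep, the WSDL is parsed for you and each operation becomes a method.
client = Client(WSDL)
result = client.service.GetQuote(symbol="GOOG")  # hypothetical operation

# With requests, you build the SOAP envelope by hand and parse the raw XML reply.
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/service">
      <symbol>GOOG</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""
response = requests.post(
    "http://example.com/service",
    data=envelope,
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://example.com/service/GetQuote"},
)
# response.text is raw XML that still has to be parsed manually.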

Which response goes with which request?

I'm using Python with OpenERP 7.
I fire requests to partners using urllib3. Some of these requests may be asynchronous.
So I've built a little asyncore server to wait for responses.
But the thing is, the server cannot know which response goes with which request.
In the content of my request, I have a tag named TransactionID.
So far, I tried to link responses with requests using this TransactionID.
But the responses are not the same from one partner to another.
So, what I've done is create a list of possible TransactionID tag structures.
This method works, but it's so ugly.
I was hoping for a better, cleaner solution, if someone knows how to achieve that.
Thanks !
Edit:
I think I made a mistake by calling it asynchronous.
The partner gives a synchronous response, but it only confirms that my request is OK.
Later, the partner will send me a response on a specific url:port on my server. This is the response I'm talking about. Sorry if I've not given enough details.
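For what it's worth, here is a small sketch of the TransactionID approach described above: keep a registry of pending requests keyed by TransactionID, and try a list of candidate patterns against each callback body. The tag structures below are invented examples, not the partners' real formats.

import re

# Candidate TransactionID structures seen in partner responses (invented examples).
TRANSACTION_ID_PATTERNS = [
    re.compile(r"<TransactionID>(?P<tid>[^<]+)</TransactionID>"),
    re.compile(r"<tns:TransactionID[^>]*>(?P<tid>[^<]+)</tns:TransactionID>"),
    re.compile(r'"transaction_id"\s*:\s*"(?P<tid>[^"]+)"'),
]

# Requests waiting for their asynchronous callback, keyed by TransactionID.
pending_requests = {}

def register_request(transaction_id, context):
    """Remember the context of an outgoing request until its callback arrives."""
    pending_requests[transaction_id] = context

def match_response(raw_body):
    """Find the TransactionID in a callback body and pop the matching request."""
    for pattern in TRANSACTION_ID_PATTERNS:
        found = pattern.search(raw_body)
        if found:
            return pending_requests.pop(found.group("tid"), None)
    return None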

Python: What's the difference between httplib2 and urllib2?

I'm trying to implement an OAuth2 authentication server, and for the client part I wanted to send a JSON request to the server (from a Django view). I found several libraries to do that, though the most common are httplib2 and urllib2, so I was wondering what the difference between them is and which is the best library for this purpose.
Thanks in advance.
Edit:
After searching, I found an extremely useful library called Requests and I've used it ever since. (http://docs.python-requests.org/en/latest/)
urllib2 handles opening and reading URLs. It also handles extra stuff like storing cookies.
httplib handles HTTP requests; it's what happens behind the curtain when you open a URL.
You can send a JSON request with urllib2, so you should use that.
See this.
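For instance, a minimal sketch of posting a JSON body with urllib2 (Python 2); the token endpoint and payload below are made-up placeholders, not part of any real OAuth2 provider.

import json
import urllib2  # Python 2; use urllib.request on Python 3

url = "https://auth.example.com/token"  # hypothetical endpoint
payload = {"grant_type": "authorization_code", "code": "abc123"}  # hypothetical values

request = urllib2.Request(
    url,
    data=json.dumps(payload),
    headers={"Content-Type": "application/json"},
)
response = urllib2.urlopen(request)
print json.loads(response.read())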

Python library for HTTP support - including Content-Encoding

I have a scraper which queries different websites. Some of them varyingly use Content-Encoding. And since I'm trying to simulate an AJAX query and need to mimic Mozilla, I need full support. There are multiple HTTP libraries for Python, but none of them seems complete:
httplib seems pretty low level, more like an HTTP packet sniffer really.
urllib2 is some sort of elaborate hoax. There are a dozen handlers for various web client functions, but mandatory HTTP features like Content-Encoding apparently aren't among them.
mechanize is nice, already somewhat overkill for my tasks, but it only supports CE 'gzip'.
httplib2 sounded the most promising, but it actually fails on 'deflate' encoding because of the disparity between raw deflate and zlib streams.
So are there any other options? I can't believe I'm expected to reimplement workarounds for the above libraries. And it's not a good idea to distribute patched versions alongside my application, because packagers might remove them again if the corresponding library is available as a separate distribution package.
I almost don't dare to say it, but the HTTP functions API in PHP is much nicer. And besides Content-Encoding: *, I might at some point need multipart/form-data too. So, is there a comprehensive third-party library for HTTP retrieval?
I would consider either invoking cURL as a child process or using the Python bindings for libcurl.
From this description, cURL seems to support both gzip and deflate.
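A rough sketch of the libcurl route via pycurl, against a hypothetical endpoint: libcurl negotiates and decodes the Content-Encoding itself when ENCODING is set, and the extra headers mimic a browser-issued AJAX call.

import pycurl
from io import BytesIO

buf = BytesIO()
curl = pycurl.Curl()
curl.setopt(pycurl.URL, "http://example.com/ajax-endpoint")  # hypothetical URL
# Ask for compressed responses; libcurl transparently decodes gzip and deflate.
curl.setopt(pycurl.ENCODING, "gzip, deflate")
# Headers that make the request look like a Mozilla-issued XHR.
curl.setopt(pycurl.HTTPHEADER, [
    "X-Requested-With: XMLHttpRequest",
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64)",
])
curl.setopt(pycurl.WRITEFUNCTION, buf.write)
curl.perform()
curl.close()

body = buf.getvalue()  # already decompressed by libcurl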
Beautiful Soup might work. Just throwing it out there.

Client Digest Authentication Python with URLLIB2 will not remember Authorization Header Information

I am trying to use Python to write a client that connects to a custom HTTP server that uses digest authentication. I can connect and pull the first page without a problem. Using tcpdump (I am on Mac OS X; I am both a Mac and a Python noob) I can see that the first request is actually two HTTP requests, as you would expect if you are familiar with RFC 2617. The first results in a 401 UNAUTHORIZED. The header information sent back by the server is correctly used to generate a second request with the appropriate Authorization header values, which yields a 200 OK response and the payload.
Everything is great. My HTTPDigestAuthHandler opener is working, thanks to urllib2.
In the same program I attempt to request a second, different page from the same server. I expect, per the RFC, that the tcpdump will show only one request this time, reusing almost all of the same Authorization header information (nc should increment).
Instead it starts from scratch: it first gets a 401 and then regenerates the information needed for a 200.
Is it possible with urllib2 to have subsequent requests with digest authentication recycle the known Authorization Header values and only do one request?
[Re-read that a couple times until it makes sense, I am not sure how to make it any more plain]
Google has yielded surprisingly little, so I guess not. I looked at the code for urllib2.py and it's really messy (with comments like "This isn't a fabulous effort"), so I wouldn't be shocked if this was a bug. I noticed that my Connection header is set to close, and even if I set it to keep-alive, it gets overwritten. That led me to keepalive.py, but that didn't work for me either.
Pycurl won't work either.
I can hand code the entire interaction, but I would like to piggy back on existing libraries where possible.
In summary, is it possible with urllib2 and digest authentication to get 2 pages from the same server with only 3 HTTP requests executed (2 for the first page, 1 for the second)?
If you happen to have tried this before and already know it's not possible, please let me know. If you have an alternative, I am all ears.
Thanks in advance.
Although it's not available out of the box, urllib2 is flexible enough that you can add it yourself. Subclass HTTPDigestAuthHandler, hack it (the retry_http_digest_auth method, I think) to remember the authentication information, and define an http_request(self, request) method that reuses it for all subsequent requests (adding the Authorization header pre-emptively).
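A rough, untested sketch of that idea on Python 2: the subclass caches the last parsed challenge and answers it up front through get_authorization, so the nonce count keeps incrementing as RFC 2617 expects. The class name is made up, and the details may need adjusting against a real server.

import urllib2

class PreemptiveDigestAuthHandler(urllib2.HTTPDigestAuthHandler):
    """Cache the last digest challenge and answer it pre-emptively on
    later requests, avoiding the extra 401 round trip."""
    _last_chal = None

    def retry_http_digest_auth(self, req, auth):
        # Remember the parsed challenge before the parent builds the
        # Authorization header and retries the request.
        token, challenge = auth.split(' ', 1)
        self._last_chal = urllib2.parse_keqv_list(urllib2.parse_http_list(challenge))
        return urllib2.HTTPDigestAuthHandler.retry_http_digest_auth(self, req, auth)

    def http_request(self, req):
        # If we already hold a challenge, answer it up front; get_authorization
        # increments the nonce count (nc) for each new request.
        if self._last_chal and not req.has_header('Authorization'):
            auth = self.get_authorization(req, self._last_chal)
            if auth:
                req.add_unredirected_header('Authorization', 'Digest %s' % auth)
        return req

    https_request = http_request

# Usage: register credentials and install the opener as usual.
# handler = PreemptiveDigestAuthHandler()
# handler.add_password(realm="example", uri="http://server/", user="me", passwd="secret")
# opener = urllib2.build_opener(handler)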
