Python: What's the difference between httplib2 and urllib2?

I'm trying to implement an OAuth2 authentication server, and for the client part I wanted to send a JSON request to the server (from a Django view). I found several libraries to do that, though the most common are httplib2 and urllib2. I was wondering what the difference is between them, and which is the best library for this purpose.
Thanks in advance.
Edit:
After searching, I found an extremely useful library called Requests and I've used it ever since. (http://docs.python-requests.org/en/latest/)

urllib2 handles opening and reading URLs. It also handles extra things like storing cookies.
httplib handles HTTP requests; it's what happens behind the curtain when you open a URL.
You can send a JSON request with urllib2, so you should use that.
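As a minimal sketch (Python 2, with a hypothetical endpoint URL), sending a JSON request with urllib2 looks roughly like this:

import json
import urllib2

url = "http://example.com/oauth/token"  # hypothetical endpoint
payload = json.dumps({"grant_type": "client_credentials"})  # example body

# Supplying data makes urllib2 send a POST instead of a GET
req = urllib2.Request(url, data=payload,
                      headers={"Content-Type": "application/json"})
resp = urllib2.urlopen(req)
print resp.read()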

Related

Zeep vs Requests for SOAP APIs in Python

So I know that Python's requests module can be used to handle REST APIs, and as per this answer here, requests can also handle SOAP APIs.
I've worked only with REST APIs so far, but the few people I know who work with SOAP APIs almost always use another module, zeep.
Is there anything I'm missing?
Why is there a need for a whole separate module when it is possible using requests as well — and more importantly, why doesn't anybody just use requests instead of using zeep?
Ease of use and reusability. While you could use the requests module and write the raw XML yourself, Zeep is a high-level interface which generates the XML automatically.
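For contrast, here is a rough sketch of the same call done both ways; the WSDL URL, service endpoint, and GetQuote operation are all made up for illustration:

import requests
import zeep

# With zeep: the client reads the (hypothetical) WSDL and builds the XML for you
client = zeep.Client("http://example.com/service?wsdl")
result = client.service.GetQuote(symbol="ABC")

# With requests: you write the SOAP envelope by hand and POST it yourself
envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/service">
      <symbol>ABC</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""
response = requests.post("http://example.com/service", data=envelope,
                         headers={"Content-Type": "text/xml; charset=utf-8"})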

HTTPError: Moved Permanently raised by urllib2.urlopen()

I am trying to send a PATCH request to a database using the urllib and urllib2 libraries in Python 2.7 (I cannot use the requests library because it does not work on this server and nobody has found a solution for that, so please do not suggest requests; that path is already closed).
The code looks like this:
data={"name":"whatever name"}
data=urllib.urlencode(data,'utf-8')#Encoding the dictionary of the data to make the request
req=urllib2.Request(url=next_url,headers={"Authorization": auth_header,"Content-Type": "application/json"})#Creating a request object of urllib library
req.add_data=data
req.get_method = lambda: 'PATCH'
resp = urllib2.urlopen(req)
If I don't assign both req.get_method = lambda: 'PATCH' and req.add_data = data, the Request class automatically sends a GET request, which gets a 200 response, so I guess it does not have to do with the authorization and credentials. Using Python 3 and the urllib.request library works as well, so the server definitely accepts PATCH requests.
I hope that somebody can find the solution... I cannot picture why this is happening.
Update SOLVED: I found the problem to be related to the URL I was making the request to.
The "Moved Permanently" error would indicate that the server responded with a HTTP 301 error, meaning that the URL you are requesting has been moved to another URL (https://en.wikipedia.org/wiki/HTTP_301).
I would suggest to take a network traffic capture with tools like tcpdump or wireshark, to check the HTTP conversation and confirm . If the server is actually replying with a 301 and this is not urllib raising a wrong error code, the server response should include a "Location" header with another URL, and you should try this one instead.
Note that urllib has problems when dealing with redirects., so you might want to reconsider trying to make the "requests" module work instead.
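As a sketch of that advice, you can catch the HTTPError that urllib2 raises and read the Location header from it (reusing the req object from the question):

import urllib2

try:
    resp = urllib2.urlopen(req)
except urllib2.HTTPError as e:
    if e.code == 301:
        # The 301 response should carry the new URL in its Location header
        print 'Moved permanently to:', e.info().get('Location')
    else:
        raise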

Python 2.7 urllib2 raising urllib2.HTTPError 301 when hitting redirect with xml content

I'm using urllib2 to request a particular S3 bucket at hxxp://s3.amazonaws.com/mybucket. Amazon sends back an HTTP code of 301 along with some XML data (the redirect being to hxxp://mybucket.s3.amazonaws.com/). Instead of following the redirect, Python raises urllib2.HTTPError: HTTP Error 301: Moved Permanently.
According to the official Python docs at HOWTO Fetch Internet Resources Using urllib2, "the default handlers handle redirects (codes in the 300 range)".
Is Python handling this incorrectly (presumably because of the unexpected XML in the response), or am I doing something wrong? I've watched in Wireshark, and the response comes back exactly the same to Python's request as it does to me using a web client. In debugging, I don't see the XML being captured anywhere in the response object.
Thanks for any guidance.
Edit: Sorry for not posting the code initially. It's nothing special, literally just this -
import urllib2, httplib
request = urllib2.Request(site)
response = urllib2.urlopen(request)
You are better off using the requests library. requests handles redirection by default: http://docs.python-requests.org/en/latest/user/quickstart/#redirection-and-history
import requests
response = requests.get(site)
print(response.content)
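If you want to see the redirect that was followed, requests keeps the intermediate responses in response.history (a short sketch, reusing the site variable from the question):

import requests

response = requests.get(site)
for hop in response.history:
    print(hop.status_code, hop.url)        # e.g. 301 and the original bucket URL
print(response.status_code, response.url)  # final response after the redirect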
I couldn't figure out the problem with urllib2. I tried to look into the documentation (https://docs.python.org/2/library/urllib2.html), but it doesn't look intuitive.
It seems that in Python 3 they refactored it to make it less of a burden to use, but I am still convinced that requests is the way to go.
Note: The urllib2 module has been split across several modules in Python 3 named urllib.request and urllib.error. The 2to3 tool will automatically adapt imports when converting your sources to Python 3.

python microframeworks and the requests library

Working on a side project in Python that both initiates HTTP requests and has a small web server. It got me thinking: it seems like every Python microframework has its own model for accessing properties of the HTTP request (e.g. the underlying request object that you can use to get at the query string parameters, headers, etc., and the underlying response object for setting the status code, response headers, etc.). They all allow you access to the same data, and they've all kind of re-invented the wheel.
Are there any microframeworks that use the Request and Response objects from the popular requests library instead of having their own implementation? It seems like the requests library is becoming the canonical way to do HTTP requests in Python, so this would make a framework especially minimal. It would also be cool when making apps that essentially glue other services together, because forwarding/wrapping requests would be trivial: you could just change the .url attribute on the incoming request and call .prepare() to forward it, as sketched below (yes, for simple forwarding doing it at the web-server level makes more sense, but you get the idea).
Or if there aren't any frameworks that do this in particular, are there any with similar benefits i.e. the incoming http request object also works as an http client?
Edit: I wanted to point out that this is how the HTTP Request object works in Go; that's partly what inspired my question. From the net/http library: "A Request represents an HTTP request received by a server or to be sent by a client."
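To make the forwarding idea concrete, here is a rough sketch using only the public requests API; the incoming and upstream URLs are hypothetical:

import requests

# Pretend this object was handed to us by the (hypothetical) framework
req = requests.Request("GET", "http://localhost:8000/api/items",
                       headers={"Accept": "application/json"})

# Retarget the request at the upstream service and send it on
req.url = "http://upstream.example.com/api/items"
prepared = req.prepare()
with requests.Session() as session:
    response = session.send(prepared)
print(response.status_code)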
Flask is based on Werkzeug. Werkzeug itself uses a Request class, which is built on BaseRequest, but this is not the Requests library.
Note that there was a plan to create an httpcore library using both Requests and Werkzeug, but that seems to have stopped. That said, both projects are alive and kicking.
Some people have modified Flask in their apps to use Requests.

Processing Multipart/mixed on Twisted

After building a Twisted REST interface where JSON requests are processed, I now face the need to process multipart/mixed requests where a JSON payload and a binary file arrive together.
After a bit of testing with curl, I've realized that the request variable will be filled with a 'file' type, and I'd like to know whether there is any method in Twisted to handle this kind of request properly.
Thanks
Although it's an old post: I remember solving this with the email module, if anyone is interested...
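A minimal sketch of that approach (Python 3, hypothetical resource and field layout): rebuild a MIME document from the Content-Type header plus the raw body, then let the stdlib email parser split the parts:

import email
import email.policy
from twisted.web.resource import Resource

class UploadResource(Resource):
    isLeaf = True

    def render_POST(self, request):
        # The email parser needs the Content-Type header (with its
        # multipart boundary) in front of the raw body
        content_type = request.getHeader('content-type')
        raw = (b"Content-Type: " + content_type.encode('ascii') +
               b"\r\n\r\n" + request.content.read())
        msg = email.message_from_bytes(raw, policy=email.policy.default)

        for part in msg.iter_parts():
            if part.get_content_type() == 'application/json':
                json_bytes = part.get_content()   # bytes of the JSON part
            else:
                file_bytes = part.get_content()   # bytes of the binary file
        # ... handle json_bytes / file_bytes here ...
        request.setResponseCode(200)
        return b"ok"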
