Zeep vs Requests for SOAP APIs in Python

So I know that Python's requests module can be used to handle REST APIs, and as per this answer here, it can handle SOAP APIs as well.
I've worked only with REST APIs so far, but the few people I know who work with SOAP APIs almost always use a separate module, zeep.
Is there anything I'm missing?
Why is there a need for a whole separate module when it is possible using requests as well - and more importantly, why doesn't anybody just use requests instead of zeep?

Ease of use and reusability. While you could use the requests module and write the raw XML yourself, Zeep is a high-level interface that reads the service's WSDL and generates the XML for you automatically.
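For instance, a minimal sketch of the same call both ways (the WSDL URL, endpoint, namespace, and operation name GetWeather are all made-up placeholders):

    import requests
    import zeep

    # With zeep: point it at the WSDL and call the operation like a
    # method; the envelope, namespaces and types are generated for you.
    client = zeep.Client(wsdl="http://example.com/weather?wsdl")
    result = client.service.GetWeather(City="London")

    # With requests: you hand-write the envelope and the headers.
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetWeather xmlns="http://example.com/">
          <City>London</City>
        </GetWeather>
      </soap:Body>
    </soap:Envelope>"""
    response = requests.post(
        "http://example.com/weather",
        data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/GetWeather"},
    )

Zeep also validates argument names and types against the WSDL, which is a large part of the convenience.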

Related

python 3.x SOAP library for writing serverless webservice on AWS

I need to convert some legacy Java code that implements a SOAP server to Python 3.x and deploy the SOAP web services on AWS Lambda. There is plenty of documentation on how to create REST web services on AWS Lambda in Python, but I'm finding it really hard to find any information on writing a SOAP service. There are some SOAP client libraries for Python, but I can't seem to find any server-side libraries or examples of how one would do this on AWS Lambda. I found zeep, which is meant to be a client library, and pysimplesoap, but in neither case is it clear how to use them on AWS Lambda. Any suggestions?
UPDATE: So I discovered that SOAP over HTTP is simply an HTTP POST that passes the SOAP request message as the HTTP body. (Of course there are headers in the HTTP request too, indicating that the body is a soap/xml message.) So, to deploy to AWS Lambda, I basically need to deploy an HTTP POST webservice, which is easy enough using Zappa and Flask (or Django). The only tricky part is inside the post() function. Inside that function I would have to:
1. Get the request.body (SOAP/XML)
2. Use some Python library to extract the items of interest from the SOAP message, perhaps parsing the entire message using a Python XML parser
3. Execute the business logic code that creates a result, say a dictionary
4. Use some Python library to convert that result dictionary to a SOAP response and let AWS Lambda send that SOAP response back
It is parts 2 and 4 that I am not quite sure how to do. I can use xml.etree.ElementTree to parse the XML, but I suspect there might be a better/faster/simpler way. For the output, is there a way I can use something like zeep to take a dictionary of inputs and create a SOAP response message? The zeep library is meant to read the WSDL document and create request SOAP messages, not response SOAP messages, so I'm not exactly sure how to create a SOAP response message using zeep. Has anyone else done this or know of a better way to do it?
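As a rough sketch of what I have in mind, something like this might cover parts 1 through 4 with Flask and the standard library (the route, the operation name MyOperationResponse, and the result dict are made-up placeholders; a real service would dictate these via its WSDL):

    import xml.etree.ElementTree as ET
    from flask import Flask, Response, request

    SOAP_NS = "{http://schemas.xmlsoap.org/soap/envelope/}"

    def extract_params(soap_body):
        # Part 2: pull the operation's parameters out of the envelope.
        root = ET.fromstring(soap_body)
        operation = root.find(SOAP_NS + "Body")[0]  # first child of Body
        # Strip the namespace prefix from each parameter tag.
        return {child.tag.split("}")[-1]: child.text for child in operation}

    def build_response(result):
        # Part 4: wrap a result dict in a SOAP response envelope.
        envelope = ET.Element(SOAP_NS + "Envelope")
        body = ET.SubElement(envelope, SOAP_NS + "Body")
        op = ET.SubElement(body, "MyOperationResponse")  # hypothetical name
        for key, value in result.items():
            ET.SubElement(op, key).text = str(value)
        return ET.tostring(envelope)

    app = Flask(__name__)

    @app.route("/service", methods=["POST"])
    def post():
        params = extract_params(request.data)  # parts 1 and 2
        result = {"Status": "OK"}              # part 3: business logic stub
        return Response(build_response(result), mimetype="text/xml")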

API GET requests from Specific IP - Requests Library - Python

I'm looking to switch existing PHP code over to Python using the Requests library. The PHP code sends thousands of GET requests to an API to get needed data. The API limits GET requests to one every 6 seconds per IP. We have numerous IP addresses in order to pull faster. The faster the better in this case.
My question is: is there a way to send the GET requests from different IP addresses using the Requests library? I'm also open to using different Python libraries or different methods to rotate the IP addresses.
The current code makes use of curl_multi_exec with the CURLOPT_INTERFACE setting.
As far as code goes, I don't necessarily need code examples. I'm looking more for a direction or option that will allow such features in Python. I would prefer not to post code, but if it's necessary, let me know.
Thanks!
I don't believe Requests supports setting the outbound interface.
There is a Python cURL binding, pycurl, though.
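Something along these lines with pycurl, which exposes CURLOPT_INTERFACE (the URL and address are placeholders; pass an interface name or a local IP you actually own):

    import io
    import pycurl

    def get_via_interface(url, interface):
        buf = io.BytesIO()
        c = pycurl.Curl()
        c.setopt(pycurl.URL, url)
        c.setopt(pycurl.INTERFACE, interface)  # bind outbound traffic here
        c.setopt(pycurl.WRITEDATA, buf)
        c.perform()
        c.close()
        return buf.getvalue()

    # e.g. rotate through your local addresses between calls
    body = get_via_interface("https://api.example.com/data", "203.0.113.7")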

Python: What's the difference between httplib2 and urllib2?

I'm trying to implement an OAuth2 authentication server, and for the client part I wanted to send a JSON request to the server (from a Django view). I found several libraries to do that, though the most common are httplib2 and urllib2. I was wondering what the difference is between them, and which is the best library for this purpose.
Thanks in advance.
Edit:
After searching, I found an extremely useful library called Requests and have been using it ever since. (http://docs.python-requests.org/en/latest/)
urllib2 handles opening and reading URLs. It also handles extra stuff like storing cookies.
httplib handles HTTP requests; it's what happens behind the curtain when you open a URL.
You can send a JSON request with urllib2, so you should use that.
See this.
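A minimal sketch of POSTing JSON with urllib2 (Python 2; the endpoint and payload fields are hypothetical):

    import json
    import urllib2

    payload = json.dumps({"grant_type": "authorization_code",
                          "code": "..."})  # hypothetical OAuth2 fields
    req = urllib2.Request("http://example.com/oauth/token",  # hypothetical URL
                          data=payload,
                          headers={"Content-Type": "application/json"})
    response = urllib2.urlopen(req)
    print json.load(response)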

JSON-RPC server via Python

I need to implement a JSON-RPC server like this:
http://pasha.cdemo.applicationcraft.com/service/json
This server will be accessed from jQuery and I have to use Python for writing it.
What library should I use? Can you also give me an example of using that library?
Thanks.
I found cherrypy very easy to use (it doesn't come with a predefined template engine or database model, so it's IMO better than the others when your server is producing JSON and is not a typical database application).
Coupled with nginx and memcached, it can also be quite performant...
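A minimal sketch of a JSON-RPC 2.0 endpoint in CherryPy; the method table (echo) is a placeholder for your own handlers:

    import cherrypy

    # Hypothetical method table; register your own handlers here.
    METHODS = {"echo": lambda params: params}

    class JsonRpc(object):
        @cherrypy.expose
        @cherrypy.tools.json_in()
        @cherrypy.tools.json_out()
        def index(self):
            req = cherrypy.request.json
            if req.get("method") in METHODS:
                result = METHODS[req["method"]](req.get("params"))
                return {"jsonrpc": "2.0", "result": result,
                        "id": req.get("id")}
            return {"jsonrpc": "2.0",
                    "error": {"code": -32601, "message": "Method not found"},
                    "id": req.get("id")}

    if __name__ == "__main__":
        cherrypy.quickstart(JsonRpc())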
Python 2.6 comes with the json module in the standard library, which allows you to effectively convert Python data structures to JSON responses.
For HTTP communication and request handling, you can use a Python web framework like Pyramid or Django, or HTTP server software like Tornado. It really depends on what you need to process in your JSON-RPC calls.

Python library for HTTP support - including Content-Encoding

I have a scraper which queries different websites. Some of them use varying Content-Encoding schemes. And since I'm trying to simulate an AJAX query and need to mimic Mozilla, I need full support. There are multiple HTTP libraries for Python, but none seems complete:
httplib seems pretty low-level, more like an HTTP packet sniffer really.
urllib2 is some sort of elaborate hoax. There are a dozen handlers for various web client functions, but mandatory HTTP features like Content-Encoding apparently aren't among them.
mechanize: is nice, already somewhat overkill for my tasks, but only supports Content-Encoding 'gzip'.
httplib2: sounded most promising, but actually fails on 'deflate' encoding, because of the disparity between raw deflate and zlib streams.
So are there any other options? I can't believe I'm expected to reimplement workarounds for the above libraries. And it's not a good idea to distribute patched versions alongside my application, because packagers might remove them again if the corresponding library is available as a separate distribution package.
I almost don't dare say it, but the HTTP functions API in PHP is much nicer. And besides Content-Encoding: *, I might at some point need multipart/form-data too. So, is there a comprehensive third-party library for HTTP retrieval?
I would consider either invoking cURL as a child process or using the Python bindings for libcurl (pycurl).
From this description cURL seems to support gzip and deflate.
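A sketch of the pycurl route, asking libcurl to negotiate gzip/deflate and decode the body transparently (the URL and the AJAX-mimicking header are placeholders):

    import io
    import pycurl

    buf = io.BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, "http://example.com/ajax")  # placeholder URL
    # Send Accept-Encoding for both schemes and let libcurl decode
    # the response body transparently.
    c.setopt(pycurl.ENCODING, "gzip, deflate")
    c.setopt(pycurl.HTTPHEADER, ["X-Requested-With: XMLHttpRequest"])
    c.setopt(pycurl.WRITEDATA, buf)
    c.perform()
    c.close()
    print(buf.getvalue())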
Beautiful Soup might work. Just throwing it out there.
