I'm setting up a small Python service to act as a REST API reverse proxy, and I'm hoping there are some libraries available to help speed this process up.
I need to be able to run a function that calculates a value to inject as a request header when the request is proxied through to the backend.
As it stands, I have a simple script that computes the value, injects it into an Nginx config file, and then forces an Nginx hot reload via signals, but I'm trying to remove this dependency for what should be a fairly simple task.
Would a good approach be to use Falcon as the listener and combine it with another library to inject headers and forward the requests?
Thanks for reading.
Edit: I've been reading https://aiohttp.readthedocs.io/en/stable/ as it seems to be the right direction.
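For reference, a minimal sketch of what that could look like with aiohttp; the upstream address, the header name, and compute_token() are hypothetical placeholders, not part of any real setup:

# Rough aiohttp sketch. UPSTREAM, the header name and compute_token()
# are all hypothetical placeholders.
import aiohttp
from aiohttp import web

UPSTREAM = 'http://127.0.0.1:8080'  # hypothetical backend address

def compute_token():
    # stand-in for the function that calculates the header value
    return 'example-value'

async def proxy(request):
    headers = dict(request.headers)
    headers.pop('Host', None)
    headers['X-Computed-Token'] = compute_token()  # the injected header
    async with aiohttp.ClientSession() as session:
        async with session.request(request.method,
                                   UPSTREAM + request.rel_url.path_qs,
                                   headers=headers,
                                   data=await request.read()) as upstream:
            body = await upstream.read()
            return web.Response(status=upstream.status, body=body,
                                content_type=upstream.content_type)

app = web.Application()
app.router.add_route('*', '/{tail:.*}', proxy)
# web.run_app(app, port=8000)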
Thanks to someone over at falcon, this is now the accepted answer!
import io

import falcon
import requests


class Proxy(object):
    UPSTREAM = 'https://httpbin.org'

    def __init__(self):
        self.session = requests.Session()

    def handle(self, req, resp):
        headers = dict(req.headers, Via='Falcon')
        for name in ('HOST', 'CONNECTION', 'REFERER'):
            headers.pop(name, None)

        request = requests.Request(req.method, self.UPSTREAM + req.path,
                                   data=req.bounded_stream.read(),
                                   headers=headers)
        prepared = request.prepare()
        from_upstream = self.session.send(prepared, stream=True)

        resp.content_type = from_upstream.headers.get('Content-Type',
                                                      falcon.MEDIA_HTML)
        resp.status = falcon.get_http_status(from_upstream.status_code)
        resp.stream = from_upstream.iter_content(io.DEFAULT_BUFFER_SIZE)


api = falcon.API()
api.add_sink(Proxy().handle)
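To cover the original requirement of injecting a computed header, one possible tweak (a sketch only; compute_token() and the header name are hypothetical) is to add the value to the headers dict before the request is prepared:

# Sketch: the same handler, plus one computed header. compute_token() is a
# hypothetical stand-in for the real calculation.
def compute_token():
    return 'example-value'


class InjectingProxy(Proxy):

    def handle(self, req, resp):
        headers = dict(req.headers, Via='Falcon')
        for name in ('HOST', 'CONNECTION', 'REFERER'):
            headers.pop(name, None)
        headers['X-Computed-Token'] = compute_token()  # the injected header

        request = requests.Request(req.method, self.UPSTREAM + req.path,
                                   data=req.bounded_stream.read(),
                                   headers=headers)
        from_upstream = self.session.send(request.prepare(), stream=True)

        resp.content_type = from_upstream.headers.get('Content-Type',
                                                      falcon.MEDIA_HTML)
        resp.status = falcon.get_http_status(from_upstream.status_code)
        resp.stream = from_upstream.iter_content(io.DEFAULT_BUFFER_SIZE)


api = falcon.API()
api.add_sink(InjectingProxy().handle)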
I've got a piece of code that I can't figure out how to unit test! The module pulls content from external XML feeds (twitter, flickr, youtube, etc.) with urllib2. Here's some pseudo-code for it:
params = (url, urlencode(data),) if data else (url,)
req = Request(*params)
response = urlopen(req)
#check headers, content-length, etc...
#parse the response XML with lxml...
My first thought was to pickle the response and load it for testing, but apparently urllib's response object is unserializable (it raises an exception).
Just saving the XML from the response body isn't ideal, because my code uses the header information too. It's designed to act on a response object.
And of course, relying on an external source for data in a unit test is a horrible idea.
So how do I write a unit test for this?
urllib2 has functions called build_opener() and install_opener() which you can use to mock the behaviour of urlopen():
import urllib2
from StringIO import StringIO

def mock_response(req):
    if req.get_full_url() == "http://example.com":
        resp = urllib2.addinfourl(StringIO("mock file"), "mock message", req.get_full_url())
        resp.code = 200
        resp.msg = "OK"
        return resp

class MyHTTPHandler(urllib2.HTTPHandler):
    def http_open(self, req):
        print "mock opener"
        return mock_response(req)

my_opener = urllib2.build_opener(MyHTTPHandler)
urllib2.install_opener(my_opener)

response = urllib2.urlopen("http://example.com")
print response.read()
print response.code
print response.msg
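A possible way to keep this contained in a test case, restoring the default opener afterwards (class and test names here are illustrative):

import unittest

class FeedTest(unittest.TestCase):

    def setUp(self):
        urllib2.install_opener(urllib2.build_opener(MyHTTPHandler))

    def tearDown(self):
        # build_opener() with no arguments restores the default handler chain
        urllib2.install_opener(urllib2.build_opener())

    def test_mocked_fetch(self):
        response = urllib2.urlopen("http://example.com")
        self.assertEqual(response.code, 200)
        self.assertEqual(response.read(), "mock file")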
It would be best if you could write a mock urlopen (and possibly Request) which provides the minimum required interface to behave like urllib2's version. You'd then need your function/method that uses it to be able to accept this mock urlopen somehow, and to use urllib2.urlopen otherwise.
This is a fair amount of work, but worthwhile. Remember that Python is very friendly to duck typing, so you just need to provide some semblance of the response object's properties to mock it.
For example:
class MockResponse(object):
    def __init__(self, resp_data, code=200, msg='OK'):
        self.resp_data = resp_data
        self.code = code
        self.msg = msg
        self.headers = {'content-type': 'text/xml; charset=utf-8'}

    def read(self):
        return self.resp_data

    def getcode(self):
        return self.code

    # Define other members and properties you want

def mock_urlopen(request):
    return MockResponse(r'<xml document>')
Granted, some of these are difficult to mock, because for example I believe the normal "headers" is an HTTPMessage which implements fun stuff like case-insensitive header names. But, you might be able to simply construct an HTTPMessage with your response data.
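As a sketch of the "accept this mock urlopen somehow" part, the code under test could take the opener as a default argument (fetch_feed is a made-up name for your function):

import urllib2

def fetch_feed(url, urlopen=urllib2.urlopen):
    # production code passes nothing; tests pass mock_urlopen instead
    response = urlopen(url)
    if response.getcode() != 200:
        raise ValueError('unexpected status: %d' % response.getcode())
    return response.read()

# in a test:
# assert fetch_feed('http://example.com/feed.xml', urlopen=mock_urlopen) == '<xml document>'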
Build a separate class or module responsible for communicating with your external feeds.
Make this class able to be replaced by a test double. You're using Python, so you're pretty golden there; if you were using C#, I'd suggest either an interface or virtual methods.
In your unit test, insert a test double of the external feed class. Test that your code uses the class correctly, assuming that the class does the work of communicating with your external resources correctly. Have your test double return fake data rather than live data; test various combinations of the data and of course the possible exceptions urllib2 could throw.
And... that's it.
You can't effectively automate unit tests that rely on external sources, so you're best off not doing it. Run an occasional integration test on your communication module, but don't include those tests as part of your automated tests.
Edit:
Just a note on the difference between my answer and @Crast's answer. Both are essentially correct, but they involve different approaches. In Crast's approach, you use a test double on the library itself. In my approach, you abstract the use of the library away into a separate module and test double that module.
Which approach you use is entirely subjective; there's no "correct" answer there. I prefer my approach because it allows me to build more modular, flexible code, something I value. But it comes at a cost in terms of additional code to write, something that may not be valued in many agile situations.
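For illustration, a minimal sketch of the abstraction approach described above (all names are hypothetical):

# FeedClient owns all urllib2 calls; the code under test only depends on
# something with a fetch() method, so a stub can stand in for it.
import urllib2

class FeedClient(object):
    """Talks to the real external feed. Covered by integration tests only."""
    def fetch(self, url):
        response = urllib2.urlopen(url)
        return response.info(), response.read()

class StubFeedClient(object):
    """Test double returning canned headers and XML."""
    def fetch(self, url):
        return {'content-type': 'text/xml'}, '<feed><entry>hello</entry></feed>'

def latest_entry(client, url):
    # code under test: uses whichever client it is given
    headers, body = client.fetch(url)
    return body  # parse with lxml in the real code

# in a unit test:
# assert '<entry>' in latest_entry(StubFeedClient(), 'http://example.com/feed')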
You can use pymox to mock the behavior of anything and everything in the urllib2 (or any other) package. It's 2010, you shouldn't be writing your own mock classes.
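A rough sketch of what that looks like with pymox, reusing the MockResponse class from the answer above (API calls from memory, so treat this as an approximation and check the mox documentation):

import mox
import urllib2

m = mox.Mox()
m.StubOutWithMock(urllib2, 'urlopen')
urllib2.urlopen('http://example.com/feed.xml').AndReturn(MockResponse('<xml document>'))
m.ReplayAll()

# ... run the code under test, which ends up calling urllib2.urlopen ...

m.VerifyAll()
m.UnsetStubs()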
I think the easiest thing to do is to actually create a simple web server in your unit test. When you start the test, create a new thread that listens on some arbitrary port and when a client connects just returns a known set of headers and XML, then terminates.
I can elaborate if you need more info.
Here's some code:
import threading, SocketServer, time

# a request handler
class SimpleRequestHandler(SocketServer.BaseRequestHandler):

    def handle(self):
        data = self.request.recv(102400)  # token receive
        senddata = file(self.server.datafile).read()  # read data from unit test file
        self.request.send(senddata)
        time.sleep(0.1)  # make sure it finishes receiving request before closing
        self.request.close()

def serve_data(datafile):
    server = SocketServer.TCPServer(('127.0.0.1', 12345), SimpleRequestHandler)
    server.datafile = datafile
    # pass the method itself; calling it here would block instead of starting a thread
    http_server_thread = threading.Thread(target=server.handle_request)
    http_server_thread.start()
To run your unit test, call serve_data() then call your code that requests a URL that looks like http://localhost:12345/anythingyouwant.
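A sketch of how a test might use it (the fixture path is hypothetical; note the data file has to contain a complete HTTP response, status line and headers included, since the handler writes it to the socket verbatim):

import urllib2

def test_fetch_feed():
    # fixture contains e.g. "HTTP/1.0 200 OK\r\nContent-Type: text/xml\r\n\r\n<feed/>"
    serve_data('tests/fixtures/feed_response.txt')
    response = urllib2.urlopen('http://localhost:12345/anythingyouwant')
    assert '<feed' in response.read()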
Why not just mock a website that returns the response you expect? Then start the server in a thread in setUp and kill it in the tearDown. I ended up doing this for testing code that would send email by mocking an SMTP server, and it works great. Surely something more trivial could be done for HTTP...
from smtpd import SMTPServer
from threading import Thread
from time import sleep
import asyncore

SMTP_PORT = 6544

class MockSMTPServer(SMTPServer):

    def __init__(self, localaddr, remoteaddr, cb=None):
        self.cb = cb
        SMTPServer.__init__(self, localaddr, remoteaddr)

    def process_message(self, peer, mailfrom, rcpttos, data):
        print (peer, mailfrom, rcpttos, data)
        if self.cb:
            self.cb(peer, mailfrom, rcpttos, data)
        self.close()

def start_smtp(cb, port=SMTP_PORT):
    def smtp_thread():
        _smtp = MockSMTPServer(("127.0.0.1", port), (None, 0), cb)
        asyncore.loop()
    return Thread(None, smtp_thread)

def test_stuff():
    # .......snip noise
    email_result = []  # mutable container so the callback can record the email

    def email_back(*args):
        email_result.append(args)

    t = start_smtp(email_back)
    t.start()
    sleep(1)

    res.form["email"] = self.admin_email
    res = res.form.submit()
    assert res.status_int == 302, "should've redirected"

    sleep(1)
    assert email_result, "didn't get an email"
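The "more trivial" HTTP equivalent might look roughly like this with the stdlib BaseHTTPServer (Python 2 names; the response body is made up):

import threading
import BaseHTTPServer

class MockFeedHandler(BaseHTTPServer.BaseHTTPRequestHandler):

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'text/xml; charset=utf-8')
        self.end_headers()
        self.wfile.write('<feed><entry>hello</entry></feed>')

    def log_message(self, format, *args):
        pass  # keep the test output quiet

def start_http(port=6545):
    server = BaseHTTPServer.HTTPServer(('127.0.0.1', port), MockFeedHandler)
    thread = threading.Thread(target=server.handle_request)  # serve a single request
    thread.start()
    return server, thread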
Trying to improve a bit on @john-la-rooy's answer, I've made a small class allowing simple mocking for unit tests.
It should work with Python 2 and 3.
try:
    import urllib.request as urllib
except ImportError:
    import urllib2 as urllib

from io import BytesIO


class MockHTTPHandler(urllib.HTTPHandler):

    def mock_response(self, req):
        url = req.get_full_url()
        print("incoming request:", url)

        if url.endswith('.json'):
            resdata = b'[{"hello": "world"}]'
            headers = {'Content-Type': 'application/json'}

            resp = urllib.addinfourl(BytesIO(resdata), headers, url, 200)
            resp.msg = "OK"
            return resp

        raise RuntimeError('Unhandled URL', url)

    http_open = mock_response

    @classmethod
    def install(cls):
        previous = urllib._opener
        urllib.install_opener(urllib.build_opener(cls))
        return previous

    @classmethod
    def remove(cls, previous=None):
        urllib.install_opener(previous)
Used like this:
class TestOther(unittest.TestCase):

    def setUp(self):
        previous = MockHTTPHandler.install()
        self.addCleanup(MockHTTPHandler.remove, previous)
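A test method in that class could then be as simple as this (the URL and expected body are illustrative):

    def test_fetch_json(self):
        # any URL ending in .json is answered by MockHTTPHandler above
        response = urllib.urlopen('http://example.com/data.json')
        self.assertEqual(response.read(), b'[{"hello": "world"}]')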
I'm making HTTP requests with Python's urllib2 which go through a proxy.
proxy_handler = urllib2.ProxyHandler({'http': 'http://myproxy'})
opener = urllib2.build_opener(proxy_handler)
urllib2.install_opener(opener)
r = urllib2.urlopen('http://www.pbr.com')
I'd like to log all headers from this request. I know that using a standard HTTPHandler you can do:
handler = urllib2.HTTPHandler(debuglevel=1)
Is there something like this for ProxyHandler?
I'm pretty sure debuglevel isn't documented.
In practice, it's actually a feature of httplib that urllib2 just forwards along for convenience, so you don't have to pass lambda: httplib.HTTPConnection(debuglevel=1) in place of the default httplib.HTTPConnection as your HTTP object factory. So, you're unlikely to find anything similar in any of the other handlers.
But if you want to rely on an undocumented feature of the implementation, you're really going to need to read the source to see for yourself.
At any rate, the obvious way to add debugging to any of the handlers is to subclass them and do it yourself. For example:
class LoggingProxyHandler(urllib2.ProxyHandler):
    def proxy_open(self, req, proxy, type):
        had_proxy = req.has_proxy()
        response = super(LoggingProxyHandler, self).proxy_open(req, proxy, type)
        if not had_proxy and req.has_proxy():
            pass  # log stuff here
        return response
I'm relying on internal knowledge that ProxyHandler calls set_proxy on the request if it doesn't have one and needs one. It might be cleaner to instead examine the response… but you may not get all the information you want that way.
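If all you need is the header dump, a simpler route (building on the httplib debuglevel mentioned above) is to keep your ProxyHandler and add an HTTPHandler with debuglevel=1 to the same opener, since the proxied request is still sent through HTTPHandler:

import urllib2

proxy_handler = urllib2.ProxyHandler({'http': 'http://myproxy'})
# debuglevel=1 makes httplib print the request and response headers to stdout
opener = urllib2.build_opener(proxy_handler, urllib2.HTTPHandler(debuglevel=1))
urllib2.install_opener(opener)
r = urllib2.urlopen('http://www.pbr.com')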
The default Python xmlrpc.client.Transport (can be used with xmlrpc.client.ServerProxy) does not retain cookies, which are sometimes needed for cookie based logins.
For example, the following proxy, when used with the TapaTalk API (for which the login method uses cookies for authentication), will give a permission error when trying to modify posts.
proxy = xmlrpc.client.ServerProxy(URL, xmlrpc.client.Transport())
There are some solutions for Python 2 on the net, but they aren't compatible with Python 3.
How can I use a Transport that retains cookies?
The existing answer from GermainZ works only for HTTP. After a lot of time fighting with it, here is the HTTPS adaptation. Note the context option, which is crucial.
class CookiesTransport(xmlrpc.client.SafeTransport):
    """A SafeTransport (HTTPS) subclass that retains cookies over its lifetime."""
    # Note the context option - it's required for success

    def __init__(self, context=None):
        super().__init__(context=context)
        self._cookies = []

    def send_headers(self, connection, headers):
        if self._cookies:
            connection.putheader("Cookie", "; ".join(self._cookies))
        super().send_headers(connection, headers)

    def parse_response(self, response):
        # This check is required because some responses carry no cookies at all
        if response.msg.get_all("Set-Cookie"):
            for header in response.msg.get_all("Set-Cookie"):
                cookie = header.split(";", 1)[0]
                self._cookies.append(cookie)
        return super().parse_response(response)
The reason is that ServerProxy doesn't pass its context option through to the transport if a transport is specified, so we need to supply it directly in the Transport constructor.
Usage:
import xmlrpc.client
import ssl
transport = CookiesTransport(context=ssl._create_unverified_context())
# Note the trailing slash in the address as well; it's very important
server = xmlrpc.client.ServerProxy("https://<api_link>/", transport=transport)
# do stuff with server
server.myApiFunc({'param1': 'x', 'param2': 'y'})
This is a simple Transport subclass that will retain all cookies:
class CookiesTransport(xmlrpc.client.Transport):
    """A Transport subclass that retains cookies over its lifetime."""

    def __init__(self):
        super().__init__()
        self._cookies = []

    def send_headers(self, connection, headers):
        if self._cookies:
            connection.putheader("Cookie", "; ".join(self._cookies))
        super().send_headers(connection, headers)

    def parse_response(self, response):
        for header in response.msg.get_all("Set-Cookie"):
            cookie = header.split(";", 1)[0]
            self._cookies.append(cookie)
        return super().parse_response(response)
Usage:
proxy = xmlrpc.client.ServerProxy(URL, CookiesTransport())
Since xmlrpc.client in Python 3 has better suited hooks for this, it's much simpler than an equivalent Python 2 version.
I am using the Python mechanize library and I am trying to use the HTTP PUT method on some URL, but I can't find any option for this. I see only GET and POST methods...
If the PUT method is not supported, maybe someone can tell me a better library for doing this?
One possible solution:
class PutRequest(mechanize.Request):
    "Extend the mechanize Request class to allow an HTTP PUT"

    def get_method(self):
        return "PUT"
You can then use this when making a request like this:
browser.open(PutRequest(url, data=your_encoded_params, headers=your_headers))
NOTE: I arrived at this solution by digging into the mechanize code to find out where mechanize was setting the HTTP method. I noticed that when we call mechanize.Request, we are using the Request class in _request.py, which in turn extends the Request class in _urllib2_fork.py. The HTTP method is actually set in get_method of the Request class in _urllib2_fork.py, and it turns out that get_method was allowing only GET and POST methods. To get past this limitation, I ended up writing my own PUT and DELETE classes that extended mechanize.Request but overrode get_method() only.
Use Requests:
>>> import requests
>>> result = requests.put("http://httpbin.org/put", data='hello')
>>> result.text
Per documentation:
requests.put(url, data=None, **kwargs)
Sends a PUT request. Returns Response object.
Parameters:
url – URL for the new Request object.
data – (optional) Dictionary or bytes to send in the body of the Request.
**kwargs – Optional arguments that request takes.
Via Mechanize:
import mechanize
import json
class PutRequest(mechanize.Request):
    def get_method(self):
        return 'PUT'

browser = mechanize.Browser()
browser.open(
    PutRequest('http://example.com/',
               data=json.dumps({'locale': 'en'}),
               headers={'Content-Type': 'application/json'}))
See also http://qxf2.com/blog/python-mechanize-the-missing-manual/ (probably outdated).
Requests does it in a nicer way, as Key Zhu said.