I'm not quite sure how to phrase this question precisely, so hopefully it makes sense.
I have an HTTP client that uses the requests package, and now I'd like to use Locust to run load tests.
To use Locust properly, it looks like I should extend HttpLocust, which uses Locust's own client for the HTTP requests, but my class already has its own client that makes the requests.
So I'm not sure how to use Locust. Should I just use the Locust class and forget about HttpLocust?
Have you created Locust tests using the Requests package? Any pointers?
Any other Python HTTP load test framework you recommend instead?
The HttpLocust class already uses the requests package, so you can use that instead of your client.
If you want to use your client, you should extend the Locust class. For example:
from locust import Locust
from locust.exception import LocustError

class MyHttpLocust(Locust):
    def __init__(self):
        super(MyHttpLocust, self).__init__()
        if not self.host:
            raise LocustError('host is missing')
        # MyHttpClient is your own requests-based client from the question
        self.client = MyHttpClient(self.host)
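To round this out, here is a hedged sketch of how a task set could then drive that client, assuming MyHttpClient exposes a get method (old-style Locust 0.x API, matching the HttpLocust era of the question; UserTasks and MyUser are illustrative names):

from locust import TaskSet, task

class UserTasks(TaskSet):
    @task
    def index(self):
        # self.client resolves to the MyHttpClient instance created above
        self.client.get("/")

class MyUser(MyHttpLocust):
    task_set = UserTasks
    min_wait = 1000  # wait 1-3 seconds between tasks
    max_wait = 3000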
I have a package with a bunch of classes that act as wrappers for other systems. For example:
# mysystem.py
import requests

class MySystem():
    def __init__(self):
        self.session = requests.session()

    def login(self, username, password):
        self.session.post("https://mysystem.com/", data={"username": username, "password": password})
I also have integration tests:
# test_mysystem.py
from mysystem import MySystem

def test_login():
    system = MySystem()
    system.login(username="test", password="P#ssw0rd")
I now want to mock out the requests to the real systems so I don't depend on them to run my tests. I will use responses to register the fake responses in a fixture (I'm using pytest).
The problem is that there are many classes (and more to come), so manually collecting every desired response would be a tedious task. My idea for automating this, since the integration tests already request every single page I'm interested in mocking, is to create a fixture that saves all the responses (along with their URLs) after a full run of my tests. I could later use the saved information to register my responses more easily.
But how can I monkeypatch the requests made by the sessions that are properties of each system class during testing time?
It seems like you want to do a record/replay of the network calls and responses.
I know a couple of libraries you can use:
pytest-automock
vcrpy - which has a couple of pytest/unittest wrappers: pytest-recording, vcrpy-unittest, pytest-vcr
The basic principle is the same: each library has a "record mode", where you run your code once and the actual network calls are made and serialized into some file. When the tests are run afterwards, the file is deserialized and used to replay the same responses instead of making the network calls.
With pytest-recording, all you need to do is add the vcr mark to the tests you want recorded:

# test_mysystem.py
import pytest
from mysystem import MySystem

# Alternatively, you can assign the mark to pytestmark to mark every test in the module:
# pytestmark = pytest.mark.vcr(filter_post_data_parameters=["password"])

@pytest.mark.vcr(filter_post_data_parameters=["password"])
def test_login():
    system = MySystem()
    system.login(username="test", password="P#ssw0rd")
Then, the first time you run your test using pytest test_mysystem.py --record-mode once, it will proceed as usual but also record all the network interactions in yaml files under the "cassettes" folder.
The next time you run the same command above, the cassettes will be loaded and no actual request will take place. You can make sure this is the case by using the option --block-network or even disconnecting your machine from the network.
Be advised that, by default, all the transmitted data will be recorded. In your specific example, you will probably want to leave out the password. Fortunately, vcrpy supports filtering. You can see I already did that in my example by passing filter_post_data_parameters=["password"] to the mark.
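If several tests need the same options, pytest-recording also lets you supply them once through a vcr_config fixture instead of repeating them on every mark (a minimal sketch; the fixture scope is a judgment call):

# conftest.py
import pytest

@pytest.fixture(scope="module")
def vcr_config():
    # These options are passed to VCR for every vcr-marked test in scope
    return {"filter_post_data_parameters": ["password"]}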
I am fighting with tornado and the official Python oauth2client, gcloud... modules.
These modules accept an alternate HTTP client passed with http=, as long as it has a request method that these libraries can call whenever an HTTP request must be sent to Google and/or the access tokens must be renewed using the refresh tokens.
I have created a simple class which has a self.client = AsyncHTTPClient().
Its request method then returns self.client.fetch(...).
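(A sketch of what such a wrapper looks like, reconstructed from the description above; it is not the asker's actual code and the class name is illustrative:)

from tornado.httpclient import AsyncHTTPClient

class TornadoHttp(object):
    def __init__(self):
        self.client = AsyncHTTPClient()

    def request(self, uri, method='GET', body=None, headers=None, **kwargs):
        # Returns a Future, not the (response, content) tuple that
        # httplib2.Http.request returns, which is exactly the mismatch below.
        return self.client.fetch(uri, method=method, body=body, headers=headers)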
My goal is to be able to yield any of these libraries' calls, so that tornado executes them asynchronously.
The thing is that they depend heavily on what the default client, httplib2.Http(), returns: a (response, content) tuple.
I am really stuck and cannot find a clean way of making this async.
If anyone has already found a way, please help.
Thank you in advance.
These libraries do not support asynchronous operation, and the porting process is not always easy.
oauth2client
Depending on what you want to do, maybe Tornado's GoogleOAuth2Mixin (sketched below) or tornado-alf will be enough.
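The GoogleOAuth2Mixin route follows Tornado's documented pattern, roughly like this (redirect URI and settings keys are as in the Tornado docs example):

import tornado.auth
import tornado.gen
import tornado.web

class GoogleOAuth2LoginHandler(tornado.web.RequestHandler,
                               tornado.auth.GoogleOAuth2Mixin):
    @tornado.gen.coroutine
    def get(self):
        if self.get_argument('code', False):
            # Exchange the auth code for tokens/user info asynchronously
            user = yield self.get_authenticated_user(
                redirect_uri='http://your.site.com/auth/google',
                code=self.get_argument('code'))
            # save the user/tokens, e.g. with set_secure_cookie
        else:
            yield self.authorize_redirect(
                redirect_uri='http://your.site.com/auth/google',
                client_id=self.settings['google_oauth']['key'],
                scope=['profile', 'email'],
                response_type='code',
                extra_params={'approval_prompt': 'auto'})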
gcloud
Since I am not aware of any Tornado/asyncio implementation of gcloud-python, you could:
write it yourself. Again, it is not a simple transport change of Connection.http or request; all the stuff around it must be able to use/yield futures/coroutines.
wrap it in a ThreadPoolExecutor (as @Apero mentioned). This is a high-level API, so any nested API calls within that yield will be executed in the same thread (not using the pool). It could work well; see the sketch after this list.
use an external app (with ProcessPoolExecutor or Popen).
When I had a similar problem with AWS a couple of years ago, I ended up executing a CLI asynchronously (Tornado + subprocess.Popen + some CLI (awscli, or boto based)) and handling simple cases (like S3, basic EC2 operations) with a plain AsyncHTTPClient.
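A minimal sketch of the ThreadPoolExecutor option (the bucket.get_blob call is illustrative; any blocking gcloud call would do):

from concurrent.futures import ThreadPoolExecutor
from tornado import gen

executor = ThreadPoolExecutor(max_workers=4)

@gen.coroutine
def fetch_blob(bucket, name):
    # The blocking gcloud call runs in a worker thread; Tornado coroutines
    # can yield the concurrent.futures.Future returned by submit() directly.
    blob = yield executor.submit(bucket.get_blob, name)
    raise gen.Return(blob)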
I have some code which uses the requests library. What it does is basically simple POST/GET requests for logging in, parsing data, etc.
Naturally, I want to test that code locally without doing any actual HTTP requests.
For testing I use pytest, so it would be great if you could suggest something pytest-specific.
A monkeypatch funcarg could be the solution, but I think that mocking requests.get(...) calls, or Python's urllib directly, isn't a good idea because, for example, there are functions which make more than one HTTP request inside, so I can't just mock requests.get("anyURL") with a simple lambda *args, **kwargs: """<html>response</html>""".
There are different URLs which should return different content, sometimes depending on POST/GET data. Also, I have no idea how requests.session will behave under direct mocking. Besides that, how do I emulate session termination? How do I emulate a connection failure?
So in the end, in my opinion, it's quite hard to use monkeypatching here. At least I am not able to write a good mocking function which takes everything into account. Also, if I choose to mock urllib directly and someday the requests library starts using something different, all my tests will fail.
So the best way, I think, is to use an actual HTTP server which starts on a test run and, if possible, takes pytest's scopes etc. into account (so it's a funcarg). While googling I found only two solutions:
https://pypi.python.org/pypi/pytest-localserver
https://github.com/kevin1024/pytest-httpbin
The first one sets up the HTTP server and serves predefined content over a specific URL. That definitely does not work for me because, as I mentioned, some functions I intend to test make several requests, so all the inner requests.get() calls would get the same answer. Bad.
The second one, as far as I can see, has the same problem. Or at least I don't understand how to use it.
The third option could be writing a small Flask-based service, but then I guess I'd run into the problem that the things I use in tests should themselves be tested, which is bad practice.
Alternatively, you can un-mock get after the first call:

class Requester():
    def get(self, *args):
        ...

def mock_get(requester, response):
    orig_get = requester.get

    def return_text_and_unmock(self, *args, **kwargs):
        # restore the original method after the first call
        self.get = orig_get
        return response

    # bind the replacement as a method on this instance
    requester.get = return_text_and_unmock.__get__(requester, Requester)
    return requester
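Hypothetical usage of the helper above (the URL is illustrative):

r = mock_get(Requester(), "<html>response</html>")
assert r.get("http://example.com") == "<html>response</html>"  # canned response
# the original Requester.get is back in place for subsequent calls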
I believe using a local server for unit testing is not a good idea, as this is not really a unit test. If you're using requests, one good way of mocking the requests is to use the responses module, developed and maintained by Dropbox. With responses you can mock each request you make by specifying that certain content should be returned when a request is issued to a given URL. The README gives a quick overview of the module's abilities.
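A minimal sketch of that approach (URL and payload are illustrative):

import requests
import responses

@responses.activate
def test_login():
    # Register the fake response before the code under test runs
    responses.add(
        responses.POST,
        "https://mysystem.com/",
        json={"status": "ok"},
        status=200,
    )
    resp = requests.post("https://mysystem.com/", data={"username": "test"})
    assert resp.json() == {"status": "ok"}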
I'm working on a project that uses tornado's websocket functionality. I see a decent amount of documentation for working with asynchronous code, but nothing on how this can be used to create unit tests that work with their WebSocket implementation.
Does tornado.testing provide the functionality to do this? If so, could someone provide a brief example of how to make it happen?
Thanks in advance.
As @Vladimir said, you can still use AsyncHTTPTestCase to create/manage the test webserver instance, and you can test WebSockets in much the same way as you would normal HTTP requests; there's just no syntactic sugar to help you.
Tornado also has its own WebSocket client, so there's no need (as far as I've seen) to use a third-party client; perhaps it's a recent addition though. So try something like:
import tornado.testing
import tornado.web
import tornado.websocket

class TestWebSockets(tornado.testing.AsyncHTTPTestCase):
    def get_app(self):
        # Required override for AsyncHTTPTestCase; sets up a dummy
        # webserver for this test. MyWebSocketHandler is the handler
        # under test.
        app = tornado.web.Application([
            (r'/path/to/websocket', MyWebSocketHandler)
        ])
        return app

    @tornado.testing.gen_test
    def test_websocket(self):
        # self.get_http_port() gives us the port of the running test server.
        ws_url = "ws://localhost:" + str(self.get_http_port()) + "/path/to/websocket"
        # We need ws_url so we can feed it into our WebSocket client.
        # ws_url will read (eg) "ws://localhost:56436/path/to/websocket"
        ws_client = yield tornado.websocket.websocket_connect(ws_url)

        # Now we can run a test on the WebSocket.
        ws_client.write_message("Hi, I'm sending a message to the server.")
        response = yield ws_client.read_message()
        self.assertEqual(response, "Hi client! This is a response from the server.")
        # ...etc
Hopefully that's a good starting point anyway.
I've attempted to implement some unit tests on tornado.websocket.WebSocketHandler-based handlers and got the following results:
First of all, AsyncHTTPTestCase definitely lacks web socket support.
Still, one can use it at least to manage the IOLoop and application stuff, which is significant. Unfortunately, there is no WebSocket client provided with tornado, so a side-developed library has to step in.
Here is a unit test on web sockets using Jef Balog's tornado websocket client.
This answer (and its question) may be of interest; I use ws4py for the client and Tornado's AsyncTestCase, which simplifies the whole thing.
I have a server that has to respond to HTTP and XML-RPC requests. Right now I have an instance of SimpleXMLRPCServer, and an instance of BaseHTTPServer.HTTPServer with a custom request handler, running on different ports. I'd like to run both services on a single port.
I think it should be possible to modify the CGIXMLRPCRequestHandler class to also serve custom HTTP requests on some paths, or alternately, to use multiple request handlers based on what path is requested. I'm not really sure what the cleanest way to do this would be, though.
Use the SimpleXMLRPCDispatcher class directly from your own request handler.
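A minimal sketch of that idea (Python 3 names; the handler, function, and responses are illustrative): POSTs are dispatched as XML-RPC through the dispatcher, while GETs stay custom.

from http.server import BaseHTTPRequestHandler, HTTPServer
from xmlrpc.server import SimpleXMLRPCDispatcher

dispatcher = SimpleXMLRPCDispatcher(allow_none=True, encoding=None)
dispatcher.register_function(lambda: "pong", "ping")

class MixedHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Treat every POST body as an XML-RPC request
        data = self.rfile.read(int(self.headers["Content-Length"]))
        response = dispatcher._marshaled_dispatch(data)
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(response)

    def do_GET(self):
        # Custom plain-HTTP handling goes here
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"plain HTTP response")

HTTPServer(("localhost", 8080), MixedHandler).serve_forever()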
Is there a reason not to run a real webserver out front with URL rewrites to the two ports you are using now? It's going to make life much easier in the long run.
Simplest way would be (tested for Python 3.3 but should work for 2.x with modified imports):
from http.server import SimpleHTTPRequestHandler
from xmlrpc.server import SimpleXMLRPCRequestHandler, SimpleXMLRPCServer

class MixRequestHandler(SimpleHTTPRequestHandler, SimpleXMLRPCRequestHandler):
    pass

srv = SimpleXMLRPCServer(("localhost", 8080), MixRequestHandler)
# normal stuff for SimpleXMLRPCServer
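Hypothetical usage: register a function on the combined server and serve. GETs fall through to SimpleHTTPRequestHandler (serving files from the current directory), while XML-RPC POSTs are dispatched as usual.

def ping():
    return "pong"

srv.register_function(ping)
srv.serve_forever()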