I want to do conformance testing of the thttpd server, and I need to use Python scripts to test it.
Can you please share a script to test the transmission and reception of data to the server?
Also, what kinds of tests should be performed? Are there any specific parameters to test?
This can be done simply using the built-in urllib:
urllib.urlopen(yourserveraddress).read()
You can also do other things with urllib2 that allow you to test more functionality.
If you want some more intense tests, then you might want to build a Twisted reactor to test all your functionality.
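A minimal sketch along those lines, assuming the thttpd instance under test is reachable at http://localhost:8080 and serves /index.html; the POST path /cgi-bin/echo is a placeholder for whatever CGI endpoint your configuration exposes (this uses Python 3's urllib.request rather than the Python 2 urllib above):

import urllib.request

SERVER = "http://localhost:8080"   # assumed address of the thttpd instance

# Reception: fetch a known resource and check the status and body.
with urllib.request.urlopen(SERVER + "/index.html") as resp:
    assert resp.status == 200
    body = resp.read()
    assert b"<html" in body.lower()

# Transmission: send a POST body and make sure the server answers.
req = urllib.request.Request(SERVER + "/cgi-bin/echo",
                             data=b"field=value", method="POST")
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read()[:100])

Status codes, response headers, and error paths (for example a 404 for a missing resource) can be checked in the same way.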
We have made a Python client which is used as an interface for users. Some functions are defined in the client which internally call the APIs and give output to users.
My requirement is to automate the Python client functions and validate the output.
Please suggest tools to use.
There are several ways to do that:
You can write multiple tests for your application as test cases that call your functions, get the results, and validate them. This is called a "feature test". To do that, you can use Python's unittest library and run the tests periodically (see the sketch after this list).
If you have a web application you can use Selenium to build automatic test flows. (You can also run it in a Docker container.)
The other solution is to write another Python application that calls your functions, or sends requests wherever you want, to get the specific data and validate it. (This is essentially the same as the other two options, just with a different implementation.)
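For the first option, a minimal unittest sketch; the module my_client and the function get_user are hypothetical placeholders for your own client functions:

import unittest

from my_client import get_user   # hypothetical client module and function

class TestClientFunctions(unittest.TestCase):
    def test_get_user_returns_expected_fields(self):
        # Call the client function (which internally calls the API)
        # and validate the output it gives to users.
        result = get_user(user_id=42)
        self.assertEqual(result["id"], 42)
        self.assertIn("name", result)

if __name__ == "__main__":
    unittest.main()

You can then run it periodically with python -m unittest, or wire it into a scheduler or CI job.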
The most straightforward way is to use Python for this; the simplest solution would be a library like pytest. A more comprehensive option would be something like Robot Framework.
Given that you have jmeter in your tags, I assume that at some point you will want to run a performance test; however, it might be easier to use Locust for this, as it is a pure-Python load testing framework.
If you still want to use JMeter, it is possible to call Python programs using the OS Process Sampler.
I am currently learning to build SOAP web services with django and spyne. I have successfully tested my model using unit tests. However, when I tried to test all those #rpc functions, I had no luck at all.
What I have tried in testing those #rpc functions:
1. Get dummy data in model database
2. Start a server at localhost:8000
3. Create a suds.Client object that can communicate with localhost:8000
4. Try to invoke #rpc functions from the suds.Client object, and test if the output matches what I expected.
However, when I run the test, I believe the test gets blocked by the running server at localhost:8000, so no test code can run while the server is running.
I tried to make the server run on a different thread, but that messed up my test even more.
I have searched as much as I could online and found no materials that can answer this question.
TL;DR: how do you test #rpc functions using unit test?
I believe that if you are using a live service inside a test, that test is not really a unit test.
You might want to consider using factory_boy or mock; both are Python modules for mocking or faking an object, for instance to fake an object that returns a response to your RPC call.
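A minimal sketch of the mock approach; fetch_greeting and say_hello are hypothetical names standing in for your own wrapper code and #rpc method:

from unittest import mock

# Hypothetical code under test: a thin wrapper around a suds client.
def fetch_greeting(client, name):
    return client.service.say_hello(name)

def test_fetch_greeting_uses_rpc_result():
    # Fake the suds client so no server at localhost:8000 is needed.
    fake_client = mock.Mock()
    fake_client.service.say_hello.return_value = "Hello, World"

    assert fetch_greeting(fake_client, "World") == "Hello, World"
    fake_client.service.say_hello.assert_called_once_with("World")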
For testing I use pytest so it would be great if you suggest something pytest specific.
I have some code which uses the requests library. What it does is basically simple POST/GET requests for logging in, parsing data, etc.
Surely I want to test that code locally without doing any actual HTTP requests.
A monkeypatch funcarg could be the solution, but I think that mocking requests.get(...) calls, or Python's urllib directly, isn't a good idea, because, for example, there are functions which make more than one HTTP request inside, so I can't just mock requests.get("anyURL") with a simple lambda *args, **kwargs: """<html>response</html>""".
There are different URLs which should return different content, sometimes based on POST/GET data. I also have no idea how requests.session will behave in case of direct mocking. Besides that, how do I emulate session termination? How do I emulate a connection failure?
So in the end, in my opinion, it's quite hard to use monkey patching here. At least I am not able to write a good mocking function which takes everything into account. Also, if I choose to mock urllib directly and someday the requests library starts using something different, all my tests will fail.
So the best way, I think, is to use an actual HTTP server which starts on a test run and, if possible, respects pytest's scopes, etc. (so it's a funcarg). While googling I found only two solutions:
https://pypi.python.org/pypi/pytest-localserver
https://github.com/kevin1024/pytest-httpbin
The first one sets up an HTTP server and serves predefined content over a specific URL. That definitely does not work for me, because, as I mentioned, some functions which I intend to test make several requests, so all the inner requests.get() calls would get the same answer. Bad.
The second one, as far as I can see, has the same problem. Or at least I do not understand how to use it.
The third option could be writing a small Flask-based service, but I guess I'd run into the problem that the things I use in tests should be tested as well, which is bad practice.
Alternatively, you can unmock get after the first call:
class Requester():
    def get(self, url, **kwargs):
        ...  # the real implementation issues the HTTP request

def mock_get(requester, response):
    # Replace requester.get so that the first call returns `response`
    # and then restores the original method for subsequent calls.
    orig_get = requester.get
    def return_text_and_unmock(self, *args, **kwargs):
        self.get = orig_get   # put the real get() back
        return response
    # Bind the replacement as a method on this particular instance.
    requester.get = return_text_and_unmock.__get__(requester, Requester)
    return requester
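Hypothetical usage of the helper above: the first call returns the canned response, later calls fall through to the real get():

requester = mock_get(Requester(), "<html>response</html>")
assert requester.get("http://example.com/login") == "<html>response</html>"
# From here on, requester.get() is the original method again.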
I believe using a local server for unit testing is not a good idea, as that is not really a unit test. If you're using requests, one good way to mock the requests is to use the responses module, which is developed and maintained by Dropbox. With responses you will be able to mock each request you make by specifying that you want certain content to be returned when a request is issued to a given URL. The README gives a quick overview of the module's abilities.
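A minimal sketch with responses, assuming a hypothetical login endpoint; each registered response can target a different URL and method, so functions that make several internal requests can still be exercised:

import requests
import responses

@responses.activate
def test_login_returns_token():
    # Placeholder URL and payload; adjust to the code under test.
    responses.add(
        responses.POST,
        "https://example.com/api/login",
        json={"token": "abc123"},
        status=200,
    )
    resp = requests.post("https://example.com/api/login",
                         data={"user": "alice", "password": "secret"})
    assert resp.status_code == 200
    assert resp.json()["token"] == "abc123"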
I have not been exposed to many testing frameworks, and I wonder if there are any recommendations for achieving the following (functional testing) during the development phase. The intention is to test a web application's functionality (language agnostic?) through the exposed HTTP (REST/JSON-RPC) interface.
My backend is NOT written in Python, but because of how easy it is to use the requests library and create ad hoc HTTP requests, I simply construct HTTP POST/GET requests with the appropriate cookies, payload, etc. and check the response to validate the server's correctness.
It is a little tedious to enable specific test cases (commenting out / boolean flags) and verify the results. Is there any framework to make this more pleasant during the development phase, where frequent changes are the norm?
thanks.
Well, you're on the right track with requests. You could tie that directly into nose or unittest or any of the common Python testing frameworks that exist. As a bit of background, requests was actually written for testing Flask.
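A minimal sketch of what that looks like; the base URL, endpoints, and credentials are placeholders for your own backend:

import requests

BASE_URL = "http://localhost:8000"   # assumed address of the backend under test

def test_login_sets_session_cookie():
    resp = requests.post(BASE_URL + "/api/login",
                         json={"user": "alice", "password": "secret"})
    assert resp.status_code == 200
    assert "sessionid" in resp.cookies

def test_search_returns_results():
    resp = requests.get(BASE_URL + "/api/search", params={"q": "widget"})
    assert resp.status_code == 200
    assert isinstance(resp.json().get("results"), list)

Both nose and pytest collect functions named test_*, and you can select individual cases by name instead of commenting them out.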
Use nose to run your tests. In this case, you can declare base classes of your tests to be like this:
class SlowTestBase(BaseTestCase):
slow = True
And run it like nosetests --attr="slow", or to exclude them --attr="!slow". You can find more in the nose documentation at https://nose.readthedocs.org/en/latest/
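The attrib plugin also lets you tag individual test functions instead of whole base classes, in case that fits better; a small sketch (the test name is a placeholder):

from nose.plugins.attrib import attr

@attr('slow')
def test_full_catalog_sync():
    ...   # placeholder for a long-running test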
I'm searching for a good way to stress test a web application. Basically I'm searching for something like ab with a scriptable interface. Ideally I want to define some tasks that simulate different actions on the webapp (register an account, log in, search, etc.), and the tool runs a whole bunch of processes that execute these tasks*. As a result I would like something like "average request time", "slowest request (per URI)", etc.
*: To be independent of client bandwidth I will run these tests from some EC2 instances, so in a perfect world the tool would already support this; otherwise I will script it using boto.
If you're familiar with the python requests package, locust is very easy to write load tests in.
http://locust.io/
I've used it to write all of our perf tests.
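A minimal locustfile sketch using the current HttpUser API (newer than what locust.io shipped when this was written); the endpoints /register, /login and /search are placeholders for your webapp's URLs:

from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    wait_time = between(1, 5)   # simulated users pause 1-5 s between tasks

    @task
    def register_login_search(self):
        self.client.post("/register", {"username": "test", "password": "secret"})
        self.client.post("/login", {"username": "test", "password": "secret"})
        self.client.get("/search?q=foo")

Run it with something like locust -f locustfile.py --host=http://your-target; the web UI reports average and slowest request times per URL, and Locust can also run distributed across several machines (e.g. your EC2 instances).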
You can maybe look into these tools:
palb (Python Apache-Like Benchmark Tool) - HTTP benchmark tool with a command line interface that resembles ab.
It lacks the advanced features of ab, but it supports multiple URLs (from arguments, files, stdin, and Python code).
Multi-Mechanize - Performance Test Framework in Python
Multi-Mechanize is an open source framework for performance and load testing.
Runs concurrent Python scripts to generate load (synthetic transactions) against a remote site or service.
Can be used to generate workload against any remote API accessible from Python.
Test output reports are saved as HTML or JMeter-compatible XML.
Pylot (Python Load Tester) - Web Performance Tool
Pylot is a free open source tool for testing performance and scalability of web services.
It runs HTTP load tests, which are useful for capacity planning, benchmarking, analysis, and system tuning.
Pylot generates concurrent load (HTTP Requests), verifies server responses, and produces reports with metrics.
Test suites are executed and monitored from a GUI or shell/console.
(Pylot on Google Code)
The Grinder
Default script language is Jython.
Pretty compact how-to guide.
Tsung
Maybe a bit unusual at first use, but really good for stress testing.
Step-by-step guide.
+1 for locust.io in the answer above.
I would recommend JMeter.
See: http://jmeter.apache.org/
You can set up JMeter as a proxy for your browser to record actions like login, and then stress test your web application. You can also write scripts for it.
Don't forget FunkLoad; it's very easy to use.