I have written a very simple tornado handler intended to test the upload speed of some devices that are deployed remotely. The main test is going to be run in said remote devices where (thanks to cURL), I can get a detailed report on the different times the upload took.
The only thing the Tornado handler really has to do is accept a body with a number of bytes (that's pretty much it):
import logging
import tornado.web

class TestUploadHandler(tornado.web.RequestHandler):
    def post(self):
        logging.debug("Testing upload")
        self.write("")
So, the code above works, but it is kind of... almost shameful :-D To make it a bit more... showable, I'd like to show a few more useful logs, like the time that the request took to upload or something like that. I don't know... Something a bit juicier.
Is there any way of measuring the upload speed within the Tornado handler itself? I've googled how to benchmark Tornado handlers, but all I seem to be able to find are performance comparisons between different web servers.
Thank you in advance.
Well, it's pretty straightforward to time how long the upload itself took:
import time

class TestUploadHandler(tornado.web.RequestHandler):
    def post(self):
        logging.debug("Testing upload")
        start = time.time()
        self.write({})
        end = time.time()
        print("Time to write was {} seconds.".format(end - start))
You could also move the timing code to a decorator, if you want to use it in more than one handler:
from functools import wraps
import time

def timer(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        ret = func(*args, **kwargs)
        end = time.time()
        print('Function took {} seconds'.format(end - start))
        return ret
    return wrapper
class TestUploadHandler(tornado.web.RequestHandler):
    @timer
    def post(self):
        logging.debug("Testing upload")
        self.write({})
Edit:
Given that you're trying to measure how long an upload to the server takes from the server's perspective, the above approach isn't going to work. It looks like the closest you can get with Tornado is to use the @tornado.web.stream_request_body decorator, so that you receive the request body as a stream:
@tornado.web.stream_request_body
class ValueHandler(tornado.web.RequestHandler):
    def initialize(self):
        self.start = None

    def post(self):
        end = time.time()
        print(self.request)
        if self.start:
            print("Upload time %s" % (end - self.start))
        self.write({})

    def data_received(self, data):
        if not self.start:
            self.start = time.time()
When the first chunk of the request body is received, we save the time (as self.start). The post method will be called as soon as the complete body is received, so we get end then.
I had trouble getting this to work reliably with large file uploads, though; it seems to work fine for smallish files (under 100 MB).
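To turn the timing into an actual upload speed, you can also count the bytes as they stream in: accumulate len(data) in data_received, then divide by the elapsed time in post. The helper below is only a sketch of the reporting part (upload_report is a made-up name, not a Tornado API); the comments show where it would plug into the handler above.

```python
def upload_report(num_bytes, start, end):
    """Format an upload-speed summary for a finished request."""
    elapsed = max(end - start, 1e-9)  # guard against division by zero on tiny uploads
    mib_per_s = num_bytes / elapsed / (1024 * 1024)
    return "%d bytes in %.3fs (%.2f MiB/s)" % (num_bytes, elapsed, mib_per_s)

# Inside the stream_request_body handler you would do something like:
#   def data_received(self, data):
#       if self.start is None:
#           self.start = time.time()
#       self.received += len(data)       # self.received initialized to 0 in initialize()
#   def post(self):
#       print(upload_report(self.received, self.start, time.time()))
```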
Flask allows users to create custom endpoint decorators, and also to stream responses (basically by returning a generator in a flask.Response).
I would like to make the two concepts work together. Here is a decorator that initializes and closes a context before and after a request is processed. (In real life, it does database connection related stuff):
from functools import wraps

def mydecorator(f):
    @wraps(f)
    def decorated_function(*args, **kwargs):
        print("Start context")
        result = f(*args, **kwargs)
        print("End context")
        return result
    return decorated_function
And now here are two endpoints, the first one is a regular one, and the second one is a streamed one.
@app.route("/regular")
@mydecorator
def regular_endpoint():
    print("start normal")
    random_function_that_need_the_decorator_context()
    print("end normal")
    return render_template("whatever.html")

# prints: start context, start normal, end normal, end context
@app.route("/streamed")
@mydecorator
def streamed_endpoint():
    def mygenerator():
        print("start generator")
        random_function_that_need_the_decorator_context()
        for _ in range(1000):
            yield "something"
        print("end generator")
    return Response(mygenerator())

# prints: start context, end context, start generator, end generator
The regular endpoint works as expected, but the streamed endpoint fails because the inner generator function needs the decorator context, but the decorator context is closed by the time the generator function is executed.
Is there a way to keep the decorator context opened by the time the generator is executed?
There is a stream_with_context function in flask, but it seems to only provide the flask request context. Playing with after_request does not give better results as the function is called before the generator is executed.
Flask will automatically delete the request context once a response is started on the server. This is mainly to prevent memory leaks. Therefore you have to explicitly tell it to keep it. You can do this with "stream_with_context":
from flask import stream_with_context

@app.route("/streamed")
@mydecorator
def streamed_endpoint():
    def mygenerator():
        print("start generator")
        random_function_that_need_the_decorator_context()
        for _ in range(1000):
            yield "something"
        print("end generator")
    return Response(stream_with_context(mygenerator()))
Tested and working with Flask==1.1.1. More info can be found in the Flask documentation on streaming contents.
My aim is to provide to a web framework access to a Pyro daemon that has time-consuming tasks at the first loading. So far, I have managed to keep in memory (outside of the web app) a single instance of a class that takes care of the time-consuming loading at its initialization. I can also query it with my web app. The code for the daemon is:
@Pyro4.expose
@Pyro4.behavior(instance_mode='single')
class Store(object):
    def __init__(self):
        self._store = ...  # the expensive loading

    def query_store(self, query):
        return ...  # Useful query tool to expose to the web framework.
                    # Not time consuming, provided self._store is loaded.

with Pyro4.Daemon() as daemon:
    uri = daemon.register(Store)
    with Pyro4.locateNS() as ns:
        ns.register('store', uri)
    daemon.requestLoop()
The issue I am having is that although a single instance is created, it is only created at the first proxy query from the web app. This is normal behavior according to the doc, but not what I want, as the first query is still slow because of the initialization of Store.
How can I make sure the instance is already created as soon as the daemon is started?
I was thinking of creating a proxy instance of Store in the code of the daemon, but this is tricky because the event loop must be running.
EDIT
It turns out that daemon.register() can accept either a class or an object, which could be a solution. This is however not recommended in the doc (link above) and that feature apparently only exists for backwards compatibility.
Do whatever initialization you need outside of your Pyro code. Cache it somewhere. Use the instance_creator parameter of the @Pyro4.behavior decorator for maximum control over how and when an instance is created. You can even consider pre-creating server instances yourself and retrieving one from a pool if you so desire. Anyway, one possible way to do this is like so:
import Pyro4

def slow_initialization():
    print("initializing stuff...")
    import time
    time.sleep(4)
    print("stuff is initialized!")
    return {"initialized stuff": 42}

cached_initialized_stuff = slow_initialization()

def instance_creator(cls):
    print("(Pyro is asking for a server instance! Creating one!)")
    return cls(cached_initialized_stuff)

@Pyro4.behavior(instance_mode="percall", instance_creator=instance_creator)
class Server:
    def __init__(self, init_stuff):
        self.init_stuff = init_stuff

    @Pyro4.expose
    def work(self):
        print("server: init stuff is:", self.init_stuff)
        return self.init_stuff

Pyro4.Daemon.serveSimple({
    Server: "test.server"
})
But this complexity is not needed for your scenario: just initialize the thing that takes a long time and cache it somewhere. Instead of re-initializing it every time a new server object is created, refer to the cached pre-initialized result. Something like this:
import Pyro4

def slow_initialization():
    print("initializing stuff...")
    import time
    time.sleep(4)
    print("stuff is initialized!")
    return {"initialized stuff": 42}

cached_initialized_stuff = slow_initialization()

@Pyro4.behavior(instance_mode="percall")
class Server:
    def __init__(self):
        self.init_stuff = cached_initialized_stuff

    @Pyro4.expose
    def work(self):
        print("server: init stuff is:", self.init_stuff)
        return self.init_stuff

Pyro4.Daemon.serveSimple({
    Server: "test.server"
})
I have to test a server based on Jetty. This server can speak its own protocol, HTTP, HTTPS and, lately, SPDY. I have some stress tests based on httplib/http.client: each thread starts with a similar URL (some data in the query string is variable), adds its execution time to a global variable, and every few seconds prints some statistics. The code looks like:
t_start = time.time()
connection.request("GET", path)
resp = connection.getresponse()
t_stop = time.time()
check_response(resp)
QRY_TIMES.append(t_stop - t_start)
The client for the native protocol shares the httplib API, so connection may be a native connection, an HTTPConnection, or an HTTPSConnection.
Now I want to add a SPDY test using the spdylay module, but its interface is opaque and I don't know how to turn that opaqueness into something similar to the httplib interface. I have made a test client based on the example, but since the 2nd argument to spdylay.urlfetch() is a class name and not an object, I don't know how to use it with my tests. I have already added tests to the on_close() method of my class that extends spdylay.BaseSPDYStreamHandler, but that is not compatible with the other tests. If it were an instance, I could use it outside the spdylay.urlfetch() call.
How can I use spdylay in code that works with httplib-style interfaces?
My only idea is to use global dictionary where url is a key and handler object is a value. It is not ideal because:
a new query with the same URL will overwrite the previous response
it is easy to forget to remove the handler from the global dictionary
But it works!
import sys
import spdylay

CLIENT_RESULTS = {}

class MyStreamHandler(spdylay.BaseSPDYStreamHandler):
    def __init__(self, url, fetcher):
        super().__init__(url, fetcher)
        self.headers = []
        self.whole_data = []

    def on_header(self, nv):
        self.headers.append(nv)

    def on_data(self, data):
        self.whole_data.append(data)

    def get_response(self, charset='UTF8'):
        return (b''.join(self.whole_data)).decode(charset)

    def on_close(self, status_code):
        CLIENT_RESULTS[self.url] = self

def spdy_simply_get(url):
    spdylay.urlfetch(url, MyStreamHandler)
    data_handler = CLIENT_RESULTS[url]
    result = data_handler.get_response()
    del CLIENT_RESULTS[url]
    return result

if __name__ == '__main__':
    if '--test' in sys.argv:
        spdy_response = spdy_simply_get('https://localhost:8443/test_spdy/get_ver_xml.hdb')
if __name__ == '__main__':
if '--test' in sys.argv:
spdy_response = spdy_simply_get('https://localhost:8443/test_spdy/get_ver_xml.hdb')
I hope somebody can do spdy_simply_get(url) better.
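One way to get rid of the global dictionary is to build a throwaway handler class per call, with the result captured in a closure. The sketch below parameterizes the fetch function and handler base so the pattern is self-contained; in practice you would call it as spdy_simply_get(url, spdylay.urlfetch, MyStreamHandler), on the assumption that spdylay.urlfetch accepts any BaseSPDYStreamHandler subclass.

```python
def spdy_simply_get(url, urlfetch, handler_base):
    """Fetch url via a per-call handler subclass of handler_base.

    urlfetch is a callable with the shape of spdylay.urlfetch(url, handler_class).
    """
    results = []  # closed over by the per-call handler class

    class OneShotHandler(handler_base):
        def on_close(self, status_code):
            results.append(self)  # no global state, no cleanup to forget

    urlfetch(url, OneShotHandler)
    return results[0].get_response()
```

Because each call gets its own class and list, concurrent queries to the same URL no longer clobber each other, and the handler is garbage-collected with the closure.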
I read the official tutorial on test-driven development, but it hasn't been very helpful in my case. I've written a small library that makes extensive use of twisted.web.client.Agent and its subclasses (BrowserLikeRedirectAgent, for instance), but I've been struggling in adapting the tutorial's code to my own test cases.
I had a look at twisted.web.test.test_web, but I don't understand how to make all the pieces fit together. For instance, I still have no idea how to get a Protocol object from an Agent, as per the official tutorial.
Can anybody show me how to write a simple test for some code that relies on Agent to GET and POST data? Any additional details or advice is most welcome...
Many thanks!
How about making life simpler (i.e. code more readable) by using @inlineCallbacks?
In fact, I'd even go as far as to suggest staying away from using Deferreds directly, unless absolutely necessary for performance or in a specific use case, and instead always sticking to @inlineCallbacks. This way you'll keep your code looking like normal code, while benefiting from non-blocking behavior:
from twisted.internet import reactor
from twisted.web.client import Agent
from twisted.internet.defer import inlineCallbacks
from twisted.trial import unittest
from twisted.web.http_headers import Headers
from twisted.internet.error import DNSLookupError

class SomeTestCase(unittest.TestCase):
    @inlineCallbacks
    def test_smth(self):
        ag = Agent(reactor)
        response = yield ag.request(
            'GET', 'http://example.com/',
            Headers({'User-Agent': ['Twisted Web Client Example']}), None)
        self.assertEqual(response.code, 200)

    @inlineCallbacks
    def test_exception(self):
        ag = Agent(reactor)
        try:
            yield ag.request(
                'GET', 'http://exampleeee.com/',
                Headers({'User-Agent': ['Twisted Web Client Example']}), None)
        except DNSLookupError:
            pass
        else:
            self.fail()
Trial should take care of the rest, i.e. waiting on the Deferreds returned from the test functions (@inlineCallbacks-wrapped callables also "magically" return a Deferred). I strongly suggest reading more on @inlineCallbacks if you're not familiar with it yet.
P.S. there's also a Twisted "plugin" for nosetests that enables you to return Deferreds from your test functions and have nose wait until they are fired before exiting: http://nose.readthedocs.org/en/latest/api/twistedtools.html
This is similar to what mike said, but attempts to test response handling. There are other ways of doing this, but I like this way. Also, I agree that testing things that wrap Agent isn't too helpful, and that keeping logic in your protocol and testing that is probably better anyway, but sometimes you just want to add some green ticks.
# Note: response_body, MyWrapper and expected_object below are placeholders
# for your own code and expected data.

class MockResponse(object):
    def __init__(self, response_string):
        self.response_string = response_string

    def deliverBody(self, protocol):
        protocol.dataReceived(self.response_string)
        protocol.connectionLost(None)

class MockAgentDeliverStuff(Agent):
    def request(self, method, uri, headers=None, bodyProducer=None):
        d = Deferred()
        reactor.callLater(0, d.callback, MockResponse(response_body))
        return d

class MyWrapperTestCase(unittest.TestCase):
    def setUp(self):
        agent = MockAgentDeliverStuff(reactor)
        self.wrapper_object = MyWrapper(agent)

    @inlineCallbacks
    def test_something(self):
        response_object = yield self.wrapper_object("example.com")
        self.assertEqual(response_object, expected_object)
How about this? Run trial on the following. Basically you're just mocking away Agent and pretending it does as advertised, and using FakeAgent to (in this case) fail all requests. If you actually want to inject data into the transport, that would take "more doing" I guess. But are you really testing your code, then? Or Agent's?
from twisted.web import client
from twisted.internet import reactor, defer

class BidnessLogik(object):
    def __init__(self, agent):
        self.agent = agent
        self.money = None

    def make_moneee_quik(self):
        d = self.agent.request('GET', 'http://no.traffic.plz')
        d.addCallback(self.made_the_money).addErrback(self.no_dice)
        return d

    def made_the_money(self, *args):
        ##print "Moneeyyyy!"
        self.money = True
        return 'money'

    def no_dice(self, fail):
        ##print "Better luck next time!!"
        self.money = False
        return 'no dice'

class FailingAgent(client.Agent):
    expected_uri = 'http://no.traffic.plz'
    expected_method = 'GET'
    reasons = ['No Reason']
    test = None

    def request(self, method, uri, **kw):
        if self.test:
            self.test.assertEqual(self.expected_uri, uri)
            self.test.assertEqual(self.expected_method, method)
            self.test.assertEqual([], kw.keys())
        return defer.fail(client.ResponseFailed(reasons=self.reasons,
                                                response=None))

class TestRequest(unittest.TestCase):
    def setUp(self):
        self.agent = FailingAgent(reactor)
        self.agent.test = self

    @defer.inlineCallbacks
    def test_foo(self):
        bid = BidnessLogik(self.agent)
        resp = yield bid.make_moneee_quik()
        self.assertEqual(resp, 'no dice')
        self.assertEqual(False, bid.money)
I'm learning Python + Tornado at the moment and am stuck on this problem:
I need to write some data to the client every few seconds (for example), even using self.write(var).
I've tried:
time.sleep — it blocks
yield gen.Task(IOLoop.instance().add_timeout, time.time() + ...) — a great thing, but I still get the full response only at the end of the timeout
.flush — for some reason it doesn't want to return the data to the client
PeriodicCallback — the browser window just keeps loading, like with the methods above
I imagine my code like this:
class MaHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.engine
    def get(self):
        for x in xrange(10):
            self.write(x)
            time.sleep(5)  # yes, it's not working
That's all. Thanks for any help with this; I've been trying to solve it for 4-5 days and really can't manage it by myself.
I still think it can't be done on the server side alone. It could be closed.
Use the PeriodicCallback class.
class MyHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def get(self):
        self._pcb = tornado.ioloop.PeriodicCallback(self._cb, 1000)
        self._pcb.start()

    def _cb(self):
        self.write('Kapooya, Kapooya!')
        self.flush()

    def on_connection_close(self):
        self._pcb.stop()