Executing an asynchronous test in Python

I need to do the following test:
send a GET request to a server (http://remote/...)
wait for the server to send a POST request in response (http://local/...)
parse the POST data and do some assertions
Selenium does not fit this case: it can't listen to connections, and I can send a GET without Selenium as well.
So I wrote a unit test:
import requests
import SocketServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from unittest import TestCase

class MobiMoneyTestCase(TestCase):
    def test_can_send_response(self):
        resp = requests.post('http://url/api/', data={'callback': 'http://localhost:8000'})

        class Handler(SimpleHTTPRequestHandler):
            def do_GET(self):
                assert self.path == '...'

        httpd = SocketServer.ThreadingTCPServer(('localhost', 8000), Handler)
The test has to wait 5 seconds for the POST request and then fail if nothing arrives. How can I merge these pieces in the test? If I put sleep(5) in test_can_send_response, the httpd handler does not reply until the countdown ends.

Basically, you want to time out a process if it takes too long? You should check out the signal module in that case.
There is a neat implementation (with a decorator) here: Timeout function if it takes too long to finish
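As a rough illustration of the signal approach (a sketch of my own, not the linked implementation, and it assumes a Unix platform, since signal.alarm is unavailable on Windows): arm an alarm before blocking in handle_request(), so the test fails if the callback never arrives within 5 seconds.

import signal
import SocketServer
from SimpleHTTPServer import SimpleHTTPRequestHandler
from unittest import TestCase

import requests

class CallbackTimeout(Exception):
    pass

def _alarm_handler(signum, frame):
    raise CallbackTimeout('no callback received within 5 seconds')

class MobiMoneyTestCase(TestCase):
    def test_can_send_response(self):
        received = []

        class Handler(SimpleHTTPRequestHandler):
            def do_POST(self):
                received.append(self.path)
                self.send_response(200)
                self.end_headers()

        httpd = SocketServer.TCPServer(('localhost', 8000), Handler)
        signal.signal(signal.SIGALRM, _alarm_handler)
        signal.alarm(5)  # fail the test if nothing arrives in 5 seconds
        try:
            requests.post('http://url/api/',
                          data={'callback': 'http://localhost:8000'})
            httpd.handle_request()  # blocks until one request (or the alarm)
        finally:
            signal.alarm(0)  # disarm the alarm
            httpd.server_close()
        self.assertTrue(received)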

How can I print a string at a given time while running serve_forever() in Python

I am trying to run the HTTP server on the Raspberry Pi to connect and send data via cellphone. I can send the data and run other functions, but now I would like to print a string at 12:00 every day. I've tried sockettime and date_time_string.
Here is my code:
from http.server import BaseHTTPRequestHandler, HTTPServer
import random
import os
from datetime import datetime, timedelta

class RequestHandler_httpd(BaseHTTPRequestHandler):
    def do_GET(self):
        global Request, test, data, case
        messagetosend = bytes('test', "utf")
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.send_header('Content-Length', len(messagetosend))
        self.end_headers()
        self.wfile.write(messagetosend)
        Request = self.requestline
        Request = Request[5: int(len(Request) - 9)]
        return

    def date_time_string(self, timestamp=None):
        pass  # stub left over from experimenting; currently does nothing

if __name__ == '__main__':
    server_address_httpd = ('192.168.66.19', 8080)
    httpd = HTTPServer(server_address_httpd, RequestHandler_httpd)
    print('start')
    httpd.serve_forever()
Any help would be appreciated.
I see five options:
1.) run your web server and add a cronjob on the same machine that accesses the required URL (for example with wget)
2.) run your web server and add a cronjob on another machine
3.) don't use a web server at all, but just use a cron job
4.) depending on the framework you're using for the web server, you might add a thread that programs a timer, executes the job and reprograms the timer (see the sketch after this list).
In general, however, I try to avoid adding threads to a web server. You have to be careful not to do things that are not thread-safe, and this can be tricky depending on your framework. But for some use cases it can be a simple solution.
5.) almost the same as 4, but simulating an HTTP request to your own URL, which will probably avoid any race condition you might encounter with 4.
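For option 4, here is a minimal sketch of how that could look with the http.server code from the question (my own illustration, not from the original answer; print_at_noon is a hypothetical helper): a daemon thread sleeps until the next 12:00 and prints, while serve_forever() keeps handling requests in the main thread.

import threading
import time
from datetime import datetime, timedelta
from http.server import HTTPServer

def print_at_noon():
    while True:
        now = datetime.now()
        # next 12:00: today if it is still ahead, otherwise tomorrow
        noon = now.replace(hour=12, minute=0, second=0, microsecond=0)
        if noon <= now:
            noon += timedelta(days=1)
        time.sleep((noon - now).total_seconds())
        print('it is 12:00')  # whatever string you need to print

if __name__ == '__main__':
    server_address_httpd = ('192.168.66.19', 8080)
    httpd = HTTPServer(server_address_httpd, RequestHandler_httpd)
    threading.Thread(target=print_at_noon, daemon=True).start()
    print('start')
    httpd.serve_forever()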

python tornado async client

I created a batch delayed HTTP (async) client which can trigger multiple async HTTP requests and, most importantly, can delay the start of the requests, so that, for example, 100 requests are not triggered at the same time.
But it has an issue. The .fetch() method takes a handleMethod callback which handles the response, but I found out that if the delay (sleep) after the fetch isn't long enough, the handle method is not even triggered (maybe the request is killed in the meantime).
It is probably related to the .run_sync method. How can I fix that? I want to keep the delays but don't want this issue to happen.
I need to parse the response regardless of how long the request takes and regardless of the following sleep call (that call has another purpose, as I said, and should not be related to response handling at all).
from tornado import gen, httpclient, ioloop

class BatchDelayedHttpClient:
    def __init__(self, requestList):
        # class members
        self.httpClient = httpclient.AsyncHTTPClient()
        self.requestList = requestList
        ioloop.IOLoop.current().run_sync(self.execute)

    @gen.coroutine
    def execute(self):
        print("exec start")
        for request in self.requestList:
            print("requesting " + request["url"])
            self.httpClient.fetch(request["url"], request["handleMethod"],
                                  method=request["method"], headers=request["headers"],
                                  body=request["body"])
            yield gen.sleep(request["sleep"])
        print("exec end")

Processing a long request in Tornado never finishes

I have the following HTTP server written using Tornado:
import json
import subprocess

import tornado.ioloop
import tornado.web
from multiprocessing import Pool, Queue

def reindex(index):
    # After some initialization, we execute a process and wait for its output
    # (indexerBinPath and arg come from the elided initialization)
    result = subprocess.check_output([indexerBinPath, arg])

class ReindexRequestHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    def post(self):
        reindexRequest = json.loads(self.request.body)
        p = self.application.settings.get('pool')
        p.apply_async(reindex, [reindexRequest['IndexName']],
                      callback=self.onIndexingFinished)

    def onIndexingFinished(self, output):
        self.flush()
        self.finish()
        logger.info('Async callback: finished')

application = tornado.web.Application([
    (r"/reindex", ReindexRequestHandler)
], pool=Pool(8), queue=Queue())

if __name__ == "__main__":
    application.listen(8625)
    try:
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
In the POST handler, I asynchronously execute the reindex function, which in turn launches a process and waits for it to finish. That works fine - the process is always executed correctly. The process may, depending on its arguments, take up to several minutes to finish. If it completes within seconds, everything works fine.
However, when it takes e.g. over 3 minutes to complete, the HTTP client which sent the POST request never gets the answer. From the standpoint of the server, it looks ok - I can see Async callback: finished logged. However, the HTTP client waits indefinitely for the response (until it fails with a timeout). I tried both Fiddler's request composer and the .NET HttpClient class.
Why does the HTTP client never get the response if the request takes long to process?
I had a similar handler, and self.finish() is what triggers the response back to the client. So if you move that line above your p.apply_async call, it ought to work as you intend.
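A sketch of that rearrangement (my reading of the answer, not code from the thread): the handler responds immediately and the pool callback only logs, since the response has already gone out. The @tornado.web.asynchronous decorator is dropped because the handler now finishes before post() returns.

class ReindexRequestHandler(tornado.web.RequestHandler):
    def post(self):
        reindexRequest = json.loads(self.request.body)
        p = self.application.settings.get('pool')
        # Respond right away; reindexing continues in the pool.
        self.finish()
        p.apply_async(reindex, [reindexRequest['IndexName']],
                      callback=self.onIndexingFinished)

    def onIndexingFinished(self, output):
        # The response has already been sent; just log completion.
        logger.info('Async callback: finished')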

Set a request timeout

I want to set a time limit on a request so that, if the queue is down, the client doesn't have to wait a long time to get a connection error. For now, when I make a request to a queue that is down, the application hangs for a long time before I get an exception.
I tried to set time_limit, soft_time_limit, timeout and soft_timeout in the client requests, but none of them worked.
How do I set a timeout that a request can wait to get a response before it fails?
Here is the code that I use to make the call:
task = clusterWorking.apply_async(queue=q, soft_time_limit=2, time_limit=5)
task = clusterWorking.apply_async(queue=q, timeout=1, soft_timeout=1)
Here is the server code.
@task(name='manager.pingdaemon.clusterWorking')
def clusterWorking():
    return "up"
You can use get(timeout)
http://celery.readthedocs.org/en/latest/reference/celery.result.html?highlight=get#celery.result.ResultSet.get
from celery.exceptions import TimeoutError

try:
    task = clusterWorking.apply_async(queue=q).get(5)
except TimeoutError:
    pass
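Note that the first positional argument of get() is the timeout in seconds, so .get(5) waits at most five seconds for the result before raising TimeoutError.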

Gevent async server with blocking requests

I have what I would think is a pretty common use case for Gevent. I need a UDP server that listens for requests, and based on the request submits a POST to an external web service. The external web service essentially only allows one request at a time.
I would like to have an asynchronous UDP server so that data can be immediately retrieved and stored so that I don't miss any requests (this part is easy with the DatagramServer gevent provides). Then I need some way to send requests to the external web service serially, but in such a way that it doesn't ruin the async of the UDP server.
I first tried monkey-patching everything, which gave me a quick solution, but one in which my requests to the external web service were not rate-limited in any way, and that resulted in errors.
It seems like what I need is a single non-blocking worker to send requests to the external web service in serial while the UDP server adds tasks to the queue from which the non-blocking worker is working.
What I need is information on running a gevent server with additional greenlets for other tasks (especially with a queue). I've been using the serve_forever function of the DatagramServer and think that I'll need to use the start method instead, but haven't found much information on how it would fit together.
Thanks,
EDIT
The answer worked very well. I've adapted the UDP server example code with the answer from @mguijarr to produce a working example for my use case:
from __future__ import print_function
import gevent.monkey
gevent.monkey.patch_all()  # patch the stdlib before importing urllib

from gevent.server import DatagramServer
import gevent.queue
import urllib

n = 0

def process_request(q):
    while True:
        request = q.get()
        print(request)
        print(urllib.urlopen('https://test.com').read())

class EchoServer(DatagramServer):
    __q = gevent.queue.Queue()
    __request_processing_greenlet = gevent.spawn(process_request, __q)

    def handle(self, data, address):
        print('%s: got %r' % (address[0], data))
        global n
        n += 1
        print(n)
        self.__q.put(n)
        self.socket.sendto('Received %s bytes' % len(data), address)

if __name__ == '__main__':
    print('Receiving datagrams on :9000')
    EchoServer(':9000').serve_forever()
Here is how I would do it:
Write a function taking a "queue" object as argument; this function will continuously process items from the queue. Each item is supposed to be a request for the web service.
This function could be a module-level function, not part of your DatagramServer instance:
def process_requests(q):
    while True:
        request = q.get()
        # do your magic with 'request'
        ...
In your DatagramServer, make the function run within a greenlet (like a background task):
self.__q = gevent.queue.Queue()
self.__request_processing_greenlet = gevent.spawn(process_requests, self.__q)
When you receive the UDP request in your DatagramServer instance, push the request to the queue:
self.__q.put(request)
This should do what you want. You still call 'serve_forever' on DatagramServer, no problem.
