I'm designing an automated test suite to simulate a client which logs in to the backend via a REST API and then opens a websocket connection. I have to test different features over REST and websocket.
Currently I'm performing each websocket test like this:
-The client logs in
-The ws communication starts
-It sends a ws message and awaits a response
-It checks the response schema and asserts the result
-The ws connection is closed
-The test ends
My problem:
When I run multiple websocket tests as described above, they open and close the websocket connection several times, and my test client ends up being "Blacklisted" because of this irregular behaviour; from then on it is unable to reconnect via websocket for a considerable time period.
My question:
How do I open a websocket connection and keep it open and active across all my tests?
I'm using pytest with the "requests" module for API calls and the "websockets" module for websocket communication.
I've tried to split the Python process into two subprocesses with the "multiprocessing" module, but I'm quite lost here: I haven't yet managed to get the pytest process to communicate with the websocket process to send the messages and retrieve the responses.
My websocket connection logic is the following code:
async def websocket_connection(device: Device, cmd_list: list[WebsocketMsg] = None):
    init_cmd = WsInitCommand(device)
    cmd_list.insert(0, init_cmd)

    async def wait_for_correct_response(ws_connection, msj_id: str) -> dict:
        response_received = False
        ws_response: dict = {}
        while not response_received:
            ws_response = json.loads(await ws_connection.recv())
            if 'id' in ws_response and ws_response['id'] == msj_id:
                response_received = True
        return ws_response

    async with websockets.connect(init_cmd.url, subprotocols=init_cmd.sub_protocols) as websocket:
        for cmd in cmd_list:
            await websocket.send(str(cmd.message))
            msg_response: dict = await wait_for_correct_response(websocket, cmd.msg_id)
    return True
Use a session-scoped pytest fixture to share a single websocket connection across all tests: https://docs.pytest.org/en/stable/reference/fixtures.html#higher-scoped-fixtures-are-executed-first.
Skip the multiprocessing approach and splitting into two processes; it would bring additional complexity and be tricky to implement.
UPD: answering the comment with this example
import pytest

class WS:
    pass

@pytest.fixture(scope="session")
def websocket_conn():
    ws = WS()
    # establish connection
    print("This is the same object for each test", id(ws))
    yield ws
    # do some cleanup, close connection
    del ws

def test_something(websocket_conn):
    print("Receiving data with", websocket_conn)
    websocket_conn.state = "modified"
    print("Still the same", id(websocket_conn))

def test_something_else(websocket_conn):
    print("Sending data with", websocket_conn)
    print("State preserved:", websocket_conn.state)
    print("Still the same", id(websocket_conn))
tests/test_ws.py::test_something This is the same object for each test 4498209520
Receiving data with <tests.test_ws.WS object at 0x10c1d3af0>
Still the same 4498209520
PASSED
tests/test_ws.py::test_something_else Sending data with <tests.test_ws.WS object at 0x10c1d3af0>
State preserved: modified
Still the same 4498209520
PASSED
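To adapt this to the question's stack, the fixture can own both an event loop and a real "websockets" connection, so ordinary synchronous tests reuse one socket for the whole session. A minimal sketch, assuming the id-matching protocol from the question; WS_URL and the WsClient wrapper are illustrative, not from any library:
import asyncio
import json

import pytest
import websockets

WS_URL = "wss://example.test/ws"  # hypothetical; substitute your backend's URL

class WsClient:
    """Sync wrapper so plain (non-async) tests can drive the socket."""
    def __init__(self, loop, ws):
        self.loop = loop
        self.ws = ws

    def request(self, message: str, msg_id: str) -> dict:
        async def roundtrip():
            await self.ws.send(message)
            # Keep reading until the response matching our id arrives.
            while True:
                response = json.loads(await self.ws.recv())
                if response.get('id') == msg_id:
                    return response
        return self.loop.run_until_complete(roundtrip())

@pytest.fixture(scope="session")
def ws_client():
    loop = asyncio.new_event_loop()
    ws = loop.run_until_complete(websockets.connect(WS_URL))
    yield WsClient(loop, ws)  # one connection shared by all tests
    loop.run_until_complete(ws.close())
    loop.close()
Each test takes ws_client and calls ws_client.request(...); the socket is opened once and closed once, which avoids the blacklisting caused by repeated connects.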
Related
I'm developing an API (using Sanic) which is a gateway to IB (Interactive Brokers), using ib_insync.
This API exposes endpoints to place a new order and to get live positions, but it is also in charge of updating order statuses in a DB using ib_insync's events.
My question is: is it possible to have the API connect only once to IB when it goes up, and reuse the same connection for all requests?
I'm currently connecting to IB using connectAsync on every request. And while this works, the API will not receive events unless it is currently handling a request.
This is one endpoint code, for reference
@app.post("/order/<symbol>")
async def post_order(request, symbol):
    order = jsonpickle.decode(request.body)
    with await IB().connectAsync("127.0.0.1", 7496, clientId=100) as ib:
        ib.orderStatusEvent += onOrderStatus
        ib.errorEvent += onTWSError
        ib.newOrderEvent += onNewOrderEvent
        contract = await ib.qualifyContractsAsync(contract)
        trade = ib.placeOrder(contract[0], order)
        return text(trade.order.orderId)
So I wish to not use the with statement, and just use a global ib connection.
When I connect at module init (using connectAsync), every later async call, like qualifyContractsAsync, just hangs. Debugging showed me that it hangs on asyncio.gather, which means I'm doing something wrong with the event loops.
I'm not familiar with this particular connection, but yes, it should be possible. Presumably the with statement is opening and closing the connection.
Instead, open and close it with a listener.
Docs re: listeners
@app.before_server_start
async def connect(app, _):
    app.ctx.foo = await Foobar()

@app.route("/")
async def handler(request):
    await request.app.ctx.foo.do_something()

@app.after_server_stop
async def close(app, _):
    app.ctx.foo.close()
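Applied to the question's ib_insync setup, the listeners might look like the sketch below. This is an assumption-laden adaptation, not tested code: host, port, and clientId are copied from the question, and the event handlers are presumed to be the ones already defined there. The point is that connectAsync then runs once, on Sanic's own event loop, so later async calls no longer hang:
from ib_insync import IB
from sanic import Sanic

app = Sanic("ib-gateway")

@app.before_server_start
async def connect_ib(app, _):
    ib = IB()
    # Connect once, on the server's running loop.
    await ib.connectAsync("127.0.0.1", 7496, clientId=100)
    ib.orderStatusEvent += onOrderStatus
    ib.errorEvent += onTWSError
    ib.newOrderEvent += onNewOrderEvent
    app.ctx.ib = ib

@app.after_server_stop
async def disconnect_ib(app, _):
    app.ctx.ib.disconnect()
Handlers then use request.app.ctx.ib instead of opening a new connection, and events keep flowing between requests because the connection is never torn down.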
I have an async HTTP web service using FastAPI. I am running multiple instances of the same service on the server on different ports, with an nginx server in front so I can utilise them all. I have a particular resource I need to protect so that only one client accesses it at a time.
@app.get("/do_something")
async def do_something():
    critical_section_here()
I tried to protect this critical section using a file lock like this:
@app.get("/do_something")
async def do_something():
    with FileLock("dosomething.lock"):
        critical_section()
This will prevent multiple processes from entering the critical section at the same time. But what I found is that this will actually deadlock. Consider the following sequence of events:
client 1 connects to port 8000 and enters the critical section
while client 1 is still using the resource, client 2 is routed to the same port 8000 and tries to acquire the file lock; it cannot, so it keeps trying, and this blocks the execution of client 1, so client 1 can never release the file lock. That means not only is this process locked, every other server instance is locked as well.
Is there a way for me to coordinate these processes so that only one of them accesses the critical section? I thought about adding a timeout to the file lock, but I really don't want to reject users; I just want them to wait until it's their turn to enter the critical section.
You can try something like this:
import asyncio
import fcntl
from contextlib import asynccontextmanager

from fastapi import FastAPI

app = FastAPI()

def acquire_lock():
    # Blocking call: waits until the exclusive flock is granted.
    f = open("/tmp/test.lock", "w")
    fcntl.flock(f, fcntl.LOCK_EX)
    return f

@asynccontextmanager
async def lock():
    loop = asyncio.get_running_loop()
    # Acquire the lock in a worker thread so the event loop stays free.
    f = await loop.run_in_executor(None, acquire_lock)
    try:
        yield
    finally:
        f.close()  # closing the file releases the flock

@app.get("/test/")
async def test():
    async with lock():
        print("Enter critical section")
        await asyncio.sleep(5)
        print("End critical section")
It will basically serialize all your requests: the blocking flock call runs in a worker thread, so the event loop keeps serving, and the current lock holder can finish its work and release the lock instead of deadlocking.
You could use aioredlock.
It allows you to create distributed locks between workers (processes). For more information about its usage, follow the link above.
The redlock algorithm is a distributed lock implementation for Redis. There are many implementations of it in several languages. In this case, this is the asyncio compatible implementation for python 3.5+.
Example of usage:
# Or you can use the lock as async context manager:
try:
    async with await lock_manager.lock("resource_name") as lock:
        assert lock.valid is True
        # Do your stuff having the lock
        await lock.extend()  # alias for lock_manager.extend(lock)
        # Do more stuff having the lock
    assert lock.valid is False  # lock will be released by context manager
except LockError:
    print('Lock not acquired')
    raise
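For completeness, the snippet above presumes a lock_manager already exists. A minimal setup sketch, following the aioredlock README; the Redis address is an assumption:
from aioredlock import Aioredlock, LockError

# Point this at your actual Redis instance(s).
lock_manager = Aioredlock([{'host': 'localhost', 'port': 6379}])
Because the lock lives in Redis, it is shared by every worker process, regardless of which port nginx routes the request to.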
My application will be sending hundreds, if not thousands, of messages over redis every second, so we are going to open the connection when the app launches and use that connection for every transaction. I am having trouble keeping that connection open.
Here is some of my redis handler class:
class RedisSingleDatabase(AsyncGetSetDatabase):
    def __init__(self):
        self._redis = None

    async def redis(self):
        if not self._redis:
            self._redis = await aioredis.create_redis(('redis', REDIS_PORT))
        return self._redis

    async def get(self, key):
        r = await self.redis()
        # r = await aioredis.create_redis(('redis', REDIS_PORT))
        print(f'CONNECTION IS {"CLOSED" if r.closed else "OPEN"}!')
        data = await r.get(key, encoding='utf-8')
        if data is not None:
            return json.loads(data)
This does not work. By the time we call r.get, the connection is closed (ConnectionClosedError).
If I uncomment the second line of the get method, and connect to the database locally in the method, it works. But then we're no longer using the same connection.
I've considered trying all this using dependency injection as well. This is a Flask app, and in my create_app method I would connect to the database and construct my RedisSingleDatabase with that connection. Besides the question of whether my current problem would be the same, I can't seem to get this approach to work, since I need to await the connection, and I can't do that in my root-level create_app, which can't be async (unless I'm missing something).
I've tried asyncio.run all over the place, and it either doesn't fix the problem or raises its own error that it can't be called from a running event loop.
Please help!!!
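No answer was recorded for this question, but one likely culprit, for future readers: each call to asyncio.run creates a fresh event loop, while an aioredis connection stays bound to the loop it was created on, so from any other loop it appears closed. A sketch of a lazy, loop-safe accessor (aioredis 1.x API, as in the question; the reconnect condition is an assumption):
async def redis(self):
    # Reconnect if we've never connected or the previous link is closed
    # (e.g. because it was created on a different, now-defunct event loop).
    if self._redis is None or self._redis.closed:
        self._redis = await aioredis.create_redis(('redis', REDIS_PORT))
    return self._redis
The deeper fix is to drive the whole application from a single event loop (one asyncio.run at the top level) rather than creating a loop per call.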
I'm trying to create a Python-based CLI that communicates with a web service via websockets. One issue that I'm encountering is that requests made by the CLI to the web service intermittently fail to get processed. Looking at the logs from the web service, I can see that the problem is caused by the fact that frequently these requests are being made at the same time (or even after) the socket has closed:
2016-09-13 13:28:10,930 [22 ] INFO DeviceBridge - Device bridge has opened
2016-09-13 13:28:11,936 [21 ] DEBUG DeviceBridge - Device bridge has received message
2016-09-13 13:28:11,937 [21 ] DEBUG DeviceBridge - Device bridge has received valid message
2016-09-13 13:28:11,937 [21 ] WARN DeviceBridge - Unable to process request: {"value": false, "path": "testcube.pwms[0].enabled", "op": "replace"}
2016-09-13 13:28:11,936 [5 ] DEBUG DeviceBridge - Device bridge has closed
In my CLI I define a class CommunicationService that is responsible for handling all direct communication with the web service. Internally, it uses the websockets package to handle communication, which itself is built on top of asyncio.
CommunicationService contains the following method for sending requests:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    asyncio.ensure_future(self._ws.send(request))
...where _ws is a websocket opened earlier in another method:
self._ws = await websockets.connect(websocket_address)
What I want is to be able to await the future returned by asyncio.ensure_future and, if necessary, sleep for a short while after in order to give the web service time to process the request before the websocket is closed.
However, since send_request is a synchronous method, it can't simply await these futures. Making it asynchronous would be pointless as there would be nothing to await the coroutine object it returned. I also can't use loop.run_until_complete as the loop is already running by the time it is invoked.
I found someone describing a problem very similar to the one I have at mail.python.org. The solution that was posted in that thread was to make the function return the coroutine object in the case the loop was already running:
def aio_map(coro, iterable, loop=None):
    if loop is None:
        loop = asyncio.get_event_loop()
    coroutines = map(coro, iterable)
    coros = asyncio.gather(*coroutines, return_exceptions=True, loop=loop)
    if loop.is_running():
        return coros
    else:
        return loop.run_until_complete(coros)
This is not possible for me, as I'm working with PyRx (Python implementation of the reactive framework) and send_request is only called as a subscriber of an Rx observable, which means the return value gets discarded and is not available to my code:
class AnonymousObserver(ObserverBase):
    ...
    def _on_next_core(self, value):
        self._next(value)
On a side note, I'm not sure if this is some sort of problem with asyncio that's commonly come across or whether I'm just not getting it, but I'm finding it pretty frustrating to use. In C# (for instance), all I would need to do is probably something like the following:
void SendRequest(string request)
{
    this.ws.Send(request).Wait();
    // Task.Delay(500).Wait(); // Uncomment if necessary
}
Meanwhile, asyncio's version of "wait" unhelpfully just returns another coroutine that I'm forced to discard.
Update
I've found a way around this issue that seems to work. I have an asynchronous callback that gets executed after the command has executed and before the CLI terminates, so I just changed it from this...
async def after_command():
    await comms.stop()
...to this:
async def after_command():
    await asyncio.sleep(0.25)  # Allow time for communication
    await comms.stop()
I'd still be happy to receive any answers to this problem for future reference, though. I might not be able to rely on workarounds like this in other situations, and I still think it would be better practice to have the delay executed inside send_request so that clients of CommunicationService do not have to concern themselves with timing issues.
In regards to Vincent's question:
Does your loop run in a different thread, or is send_request called by some callback?
Everything runs in the same thread - it's called by a callback. What happens is that I define all my commands to use asynchronous callbacks, and when executed some of them will try to send a request to the web service. Since they're asynchronous, they don't do this until they're executed via a call to loop.run_until_complete at the top level of the CLI - which means the loop is running by the time they're mid-way through execution and making this request (via an indirect call to send_request).
Update 2
Here's a solution based on Vincent's proposal of adding a "done" callback.
A new boolean field _busy is added to CommunicationService to indicate whether comms activity is occurring.
CommunicationService.send_request is modified to set _busy true before sending the request, and attaches a done callback to the send task that resets _busy once it completes:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))

    def callback(_):
        self._busy = False

    self._busy = True
    asyncio.ensure_future(self._ws.send(request)).add_done_callback(callback)
CommunicationService.stop is now implemented to wait for this flag to be set false before progressing:
async def stop(self) -> None:
    """
    Terminate communications with TestCube Web Service.
    """
    if self._listen_task is None or self._ws is None:
        return

    # Wait for comms activity to stop.
    while self._busy:
        await asyncio.sleep(0.1)

    # Allow short delay after final request is processed.
    await asyncio.sleep(0.1)

    self._listen_task.cancel()
    await asyncio.wait([self._listen_task, self._ws.close()])
    self._listen_task = None
    self._ws = None
    logger.info('Terminated connection to TestCube Web Service')
This seems to work too, and at least this way all communication timing logic is encapsulated within the CommunicationService class as it should be.
Update 3
Nicer solution based on Vincent's proposal.
Instead of self._busy we have self._send_request_tasks = [].
New send_request implementation:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    task = asyncio.ensure_future(self._ws.send(request))
    self._send_request_tasks.append(task)
New stop implementation:
async def stop(self) -> None:
    if self._listen_task is None or self._ws is None:
        return
    # Wait for comms activity to stop.
    if self._send_request_tasks:
        await asyncio.wait(self._send_request_tasks)
    ...
You could use a set of tasks:
self._send_request_tasks = set()
Schedule the tasks using ensure_future and clean up using add_done_callback:
def send_request(self, request: str) -> None:
    task = asyncio.ensure_future(self._ws.send(request))
    self._send_request_tasks.add(task)
    task.add_done_callback(self._send_request_tasks.remove)
And wait for the set of tasks to complete:
async def stop(self):
    if self._send_request_tasks:
        await asyncio.wait(self._send_request_tasks)
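On Python 3.7+, the same pattern is usually spelled with asyncio.create_task, and set.discard avoids a KeyError if a task has already been removed. A minor variant of the answer's code, not from the original:
def send_request(self, request: str) -> None:
    task = asyncio.create_task(self._ws.send(request))
    self._send_request_tasks.add(task)
    task.add_done_callback(self._send_request_tasks.discard)
Note that create_task must be called while the loop is running, which holds here since send_request fires from a callback inside the loop.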
Given that you're not inside an asynchronous function you can use the yield from keyword to effectively implement await yourself. The following code will block until the future returns:
def send_request(self, request: str) -> None:
    logger.debug('Sending request: {}'.format(request))
    future = asyncio.ensure_future(self._ws.send(request))
    yield from future.__await__()
I am trying to design a test suite for my tornado web socket server.
I am using a client to do this - connect to a server through a websocket, send a request and expect a certain response.
I am using Python's unittest to run my tests, so I cannot (and do not really want to) enforce the sequence in which the tests run.
This is how my base test class (which all test cases inherit from) is organized. (Logging and certain parts that are irrelevant here have been stripped.)
class BaseTest(tornado.testing.AsyncTestCase):

    ws_delay = .05

    @classmethod
    def setUpClass(cls):
        cls.setup_connection()
        return

    @classmethod
    def setup_connection(cls):
        # start websocket threads
        t1 = threading.Thread(target=cls.start_web_socket_handler)
        t1.start()
        # websocket opening delay
        time.sleep(cls.ws_delay)
        # this method initiates the tornado.ioloop, sets up the connection
        cls.websocket.connect('localhost', 3333)
        return

    @classmethod
    def start_web_socket_handler(cls):
        # starts tornado.websocket.WebSocketHandler
        cls.websocket = WebSocketHandler()
        cls.websocket.start()
The scheme I came up with is to have this base class init the connection once for all tests (although it does not have to be this way; I am happy to set up and tear down the connection for each test case if it solves my problems). What is important is that I do not want multiple connections open at the same time.
A simple test case looks like this.
class ATest(BaseTest):

    @classmethod
    def setUpClass(cls):
        super(ATest, cls).setUpClass()

    @classmethod
    def tearDownClass(cls):
        super(ATest, cls).tearDownClass()

    def test_a(self):
        saved_stdout = sys.stdout
        try:
            out = StringIO()
            sys.stdout = out
            message_sent = self.websocket.write_message(
                str({'opcode': 'a_message'})
            )
            output = out.getvalue().strip()
            # the code below is useless
            while output is None or not len(output):
                self.log.debug("%s waiting for response." % str(inspect.stack()[0][3]))
                output = out.getvalue().strip()
            self.assertIn(
                'a_response', output,
                "Server didn't send a_response. Instead sent: %s" % output
            )
        finally:
            sys.stdout = saved_stdout
It works fine most of the time, yet it is not fully deterministic (and therefore not reliable). Since the websocket communication is asynchronous and unittest executes tests synchronously, the server responses (which are received on the same websocket) occasionally get mixed up with the requests, and the tests fail.
I know it should be callback based, but this won't solve the response-mixing issue, unless all the tests are artificially sequenced in a series of callbacks (as in, start test_2 inside test_1_callback).
Tornado offers a testing library to help with synchronous testing, but I cannot seem to get it working with websockets (the tornado.ioloop has its own thread, which you cannot block).
I cannot find a Python synchronous websocket client library which works with a Tornado server and is RFC 6455 compliant. PyPI's websocket-client fails to meet the second demand.
My questions are:
Is there a reliable python synchronous websocket client library that meets the demands described above?
If not, what is the best way to organize a test suite like this (the tests cannot really be run in parallel)?
As far as I understand, since we're working with one websocket, the IOStreams for the test cases cannot be separated, and therefore there is no way of determining which request a response belongs to (I have multiple tests for requests of the same type with different parameters). Am I wrong about this?
Have you looked at the websocket unit tests included with tornado? They show you how you can do this:
from tornado.testing import AsyncHTTPTestCase, gen_test
from tornado.web import Application
from tornado.websocket import WebSocketHandler, websocket_connect

class MyHandler(WebSocketHandler):
    """This is the server code you're testing."""
    def on_message(self, message):
        # Put whatever response you want in here.
        self.write_message("a_response\n")

class WebSocketTest(AsyncHTTPTestCase):
    def get_app(self):
        return Application([
            ('/', MyHandler),
        ])

    @gen_test
    def test_a(self):
        ws = yield websocket_connect(
            'ws://localhost:%d/' % self.get_http_port(),
            io_loop=self.io_loop)
        ws.write_message(str({'opcode': 'a_message'}))
        response = yield ws.read_message()
        self.assertIn(
            'a_response', response,
            "Server didn't send a_response. Instead sent: %s" % response
        )
The gen_test decorator allows you to run asynchronous test cases as coroutines which, when run inside tornado's ioloop, effectively behave synchronously for testing purposes.
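As a usage note (not from the original answer): a module of such tests runs under the standard unittest runner, or via tornado's own helper, e.g. by ending the file with:
if __name__ == '__main__':
    import tornado.testing
    tornado.testing.main()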