My application will be sending hundreds, if not thousands, of messages over Redis every second, so we want to open the connection once when the app launches and reuse it for every transaction. I am having trouble keeping that connection open.
Here is part of my Redis handler class:
class RedisSingleDatabase(AsyncGetSetDatabase):
    async def redis(self):
        if not self._redis:
            self._redis = await aioredis.create_redis(('redis', REDIS_PORT))
        return self._redis

    def __init__(self):
        self._redis = None

    async def get(self, key):
        r = await self.redis()
        # r = await aioredis.create_redis(('redis', REDIS_PORT))
        print(f'CONNECTION IS {"CLOSED" if r.closed else "OPEN"}!')
        data = await r.get(key, encoding='utf-8')
        if data is not None:
            return json.loads(data)
This does not work. By the time we call r.get, the connection is closed (ConnectionClosedError).
If I uncomment the second line of the get method, and connect to the database locally in the method, it works. But then we're no longer using the same connection.
I've considered trying all this with dependency injection as well. This is a Flask app, and in my create_app method I would connect to the database and construct my RedisSingleDatabase with that connection. Besides the question of whether my current problem would persist, I can't seem to get this approach to work, since we need to await the connection, which I can't do in my root-level create_app, which can't be async (unless I'm missing something).
I've tried asyncio.run all over the place and it either doesn't fix the problem or raises its own error that it can't be called from a running event loop.
Please help!!!
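For illustration, here is a minimal sketch of one way the single-connection idea could work, assuming the root cause is that each asyncio.run call creates a fresh event loop, so a connection made on an earlier loop is unusable afterwards. The _loop bookkeeping is an assumption added for this sketch; REDIS_PORT and the aioredis 1.x API are from the question, and the base class is omitted for brevity:

import asyncio
import aioredis

class RedisSingleDatabase:
    def __init__(self):
        self._redis = None
        self._loop = None

    async def redis(self):
        loop = asyncio.get_event_loop()
        # Reconnect if there is no connection yet, or if it was created
        # on a different (possibly already-closed) event loop.
        if self._redis is None or self._loop is not loop or self._redis.closed:
            self._redis = await aioredis.create_redis(('redis', REDIS_PORT))
            self._loop = loop
        return self._redis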
Related
I'm designing an automated test suite to simulate a client which logs in to the backend via a REST API and then opens a websocket connection. I have to test different features over REST and WebSocket.
Currently I'm performing each websocket test like this:
- The client logs in
- The ws communication starts
- It sends a ws message and awaits a response
- It checks the response schema and asserts the result
- The ws connection is closed
- The test ends
My problem:
When I run multiple websocket tests as described above, they open and close the websocket connection several times, and my test client ends up being "blacklisted" because of the irregular behaviour; from then on it is unable to reconnect via ws for a considerable period of time.
My question:
How do I open a websocket connection and keep it open and active across all my tests?
I'm using pytest with the "requests" module for API calls and the "websockets" module for ws communication.
I've tried to split the Python process into two sub-processes with the "multiprocessing" module, but I'm quite lost here because I haven't been able to make the pytest process communicate with the websocket process to send the messages and retrieve the responses.
My websocket connection logic is the following code:
async def websocket_connection(device: Device, cmd_list: list[WebsocketMsg] = None):
    init_cmd = WsInitCommand(device)
    cmd_list.insert(0, init_cmd)

    async def wait_for_correct_response(ws_connection, msj_id: str) -> dict:
        response_received = False
        ws_response: dict = {}
        while not response_received:
            ws_response = json.loads(await ws_connection.recv())
            if 'id' in ws_response and ws_response['id'] == msj_id:
                response_received = True
        return ws_response

    async with websockets.connect(init_cmd.url, subprotocols=init_cmd.sub_protocols) as websocket:
        for cmd in cmd_list:
            await websocket.send(str(cmd.message))
            msg_response: dict = await wait_for_correct_response(websocket, cmd.msg_id)
    return True
Use a pytest fixture with session scope to share a singleton websocket connection across tests: https://docs.pytest.org/en/stable/reference/fixtures.html#higher-scoped-fixtures-are-executed-first.
Avoid multiprocessing and splitting the work into two processes; it would add complexity and be tricky to implement.
UPD: answering the comment with this example
import pytest

class WS:
    pass

@pytest.fixture(scope="session")
def websocket_conn():
    ws = WS()
    # establish connection
    print("This is the same object for each test", id(ws))
    yield ws
    # do some cleanup, close connection
    del ws

def test_something(websocket_conn):
    print("Receiving data with", websocket_conn)
    websocket_conn.state = "modified"
    print("Still the same", id(websocket_conn))

def test_something_else(websocket_conn):
    print("Sending data with", websocket_conn)
    print("State preserved:", websocket_conn.state)
    print("Still the same", id(websocket_conn))
tests/test_ws.py::test_something This is the same object for each test 4498209520
Receiving data with <tests.test_ws.WS object at 0x10c1d3af0>
Still the same 4498209520
PASSED
tests/test_ws.py::test_something_else Sending data with <tests.test_ws.WS object at 0x10c1d3af0>
State preserved: modified
Still the same 4498209520
PASSED
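Applied to a real connection, the same idea might look roughly like this; a sketch assuming pytest-asyncio plus the "websockets" package from the question, with a placeholder URL standing in for whatever WsInitCommand builds:

import asyncio
import pytest
import websockets

@pytest.fixture(scope="session")
def event_loop():
    # pytest-asyncio's default event_loop fixture is function-scoped;
    # a session-scoped async fixture needs a session-scoped loop.
    loop = asyncio.new_event_loop()
    yield loop
    loop.close()

@pytest.fixture(scope="session")
async def websocket_conn():
    # "wss://example.test/ws" is a placeholder, not the question's URL.
    async with websockets.connect("wss://example.test/ws") as ws:
        yield ws  # the connection stays open for the whole test session

@pytest.mark.asyncio
async def test_ping(websocket_conn):
    await websocket_conn.send('{"id": "1", "cmd": "ping"}')
    assert await websocket_conn.recv()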
I'm developing an API (using Sanic) which is a gateway to IB, using ib_insync.
This API exposes endpoints for placing a new order and getting live positions, but it is also in charge of updating order statuses in a DB using ib_insync's events.
My question is: is it possible to have the API connect to IB only once, when it goes up, and reuse the same connection for all requests?
I'm currently connecting to IB using connectAsync on every request. And while this works, the API does not receive events unless it is currently handling a request.
This is one endpoint's code, for reference:
@app.post("/order/<symbol>")
async def post_order(request, symbol):
    order = jsonpickle.decode(request.body)
    with await IB().connectAsync("127.0.0.1", 7496, clientId=100) as ib:
        ib.orderStatusEvent += onOrderStatus
        ib.errorEvent += onTWSError
        ib.newOrderEvent += onNewOrderEvent
        contract = await ib.qualifyContractsAsync(contract)
        trade = ib.placeOrder(contract[0], order)
        return text(trade.order.orderId)
So I want to avoid the with statement and just use a global ib connection.
When I connect at module init (using connectAsync), every later async call, like qualifyContractsAsync, just hangs. Debugging showed that it hangs on asyncio.gather, which suggests I'm doing something wrong with the event loops.
I'm not familiar with this particular connection, but yes, it should be possible. Presumably the with statement is opening and closing the connection.
Instead, open and close it with a listener.
Docs re: listeners
@app.before_server_start
async def connect(app, _):
    app.ctx.foo = await Foobar()

@app.route("/")
async def handler(request):
    await request.app.ctx.foo.do_something()

@app.after_server_stop
async def close(app, _):
    app.ctx.foo.close()
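Applied to the question's ib_insync setup, that might look roughly like this; a sketch that assumes the host/port/clientId from the question, with a minimal positions endpoint standing in for the real handlers:

from ib_insync import IB
from sanic import Sanic
from sanic.response import text

app = Sanic("ib-gateway")

@app.before_server_start
async def connect_ib(app, _):
    # One connection for the whole app, created on Sanic's event loop,
    # so ib_insync keeps receiving events between requests.
    app.ctx.ib = IB()
    await app.ctx.ib.connectAsync("127.0.0.1", 7496, clientId=100)

@app.get("/positions")
async def positions(request):
    # Reuse the shared connection instead of connecting per request.
    return text(str(request.app.ctx.ib.positions()))

@app.after_server_stop
async def disconnect_ib(app, _):
    app.ctx.ib.disconnect()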
Hi all, I have the following code, but for some reason I keep getting the error below. It seems to work on a colleague's PC, and we can't figure out why it won't work on mine.
We have also double-checked that we're importing the same socketio module using dir().
I've tried specifying the namespace both on sio.connect and in sio.emit, but still no luck!
socketio.exceptions.BadNamespaceError: / is not a connected namespace.
import socketio
import json

bearerToken = 'REDACT'
core = 'REDACT'
output = 'REDACT'

def getListeners(token, coreUrl, outputId):
    sio = socketio.Client(reconnection_attempts=5, request_timeout=5)
    sio.connect(url=coreUrl, transports='websocket')

    @sio.on('mwedge:batch:stats')
    def batchStats(data):
        if outputId in data['outputStats']:
            listeners = data['outputStats'][outputId][16]
            print("Number of listeners ", len(listeners))
            ips = []
            for listener in listeners:
                ips.append(listener[1])
            print("Ips", ips)

    def authCallback(data):
        print(json.dumps(data))

    sio.emit(event='auth',
             data={
                 'token': token
             },
             callback=authCallback)

getListeners(bearerToken, core, output)
The Socket.IO connection involves a number of exchanges between the client and the server. The connect() function initiates this process, but it continues in the background. The handshake is complete when the handler for your connect event is invoked; at that point you can emit.
The problem with your code is that you are not waiting until the connection handshake completes, so your emit() call happens before the connection is established. The solution is to add a connect event handler and move your emit() call there.
As an additional note, I suggest you set up your event handlers before you call the connect() function.
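Putting both suggestions together, the question's code might be rearranged roughly like this (a sketch: handlers are registered before connect() is called, and the emit happens in the connect handler):

import json
import socketio

def getListeners(token, coreUrl, outputId):
    sio = socketio.Client(reconnection_attempts=5, request_timeout=5)

    def authCallback(data):
        print(json.dumps(data))

    @sio.event
    def connect():
        # The handshake is complete once this runs, so emitting is safe now.
        sio.emit(event='auth', data={'token': token}, callback=authCallback)

    @sio.on('mwedge:batch:stats')
    def batchStats(data):
        if outputId in data['outputStats']:
            print("Number of listeners", len(data['outputStats'][outputId][16]))

    # Handlers are registered above; only now start the connection.
    sio.connect(url=coreUrl, transports='websocket')
    sio.wait()  # keep the client alive so events keep arriving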
I have a Python app written with the Tornado asynchronous framework. When an HTTP request comes in, this method gets called:
@classmethod
def my_method(cls, my_arg1):
    # Do some Database Transaction #1
    x = get_val_from_db_table1('x', id=1)
    y = get_val_from_db_table2('y', id=7)
    x += x + (2 * y)
    # Do some Database Transaction #2
    set_val_in_db_table1('x', x, id=1)
    return True
The three database operations are interrelated, and this is a concurrent application, so multiple such HTTP calls can be happening concurrently and hitting the same DB.
For data-integrity purposes, it's important that the three database operations in this method all run without another process reading or writing to those database rows in between.
How can I make sure this method has database atomicity? Does Tornado have a decorator for this?
Synchronous database access
You haven't stated how you access your database. If, as is likely, you have synchronous DB access in get_val_from_db_table1 and friends (e.g. with pymysql) and my_method is blocking (doesn't return control to the IO loop), then you block your server (which has implications for its performance and responsiveness) but effectively serialise your clients, so only one can execute my_method at a time. In terms of data consistency you therefore don't need to do anything, but generally it's a bad design. You can solve both problems with @xyres's solution in the short term (at the cost of keeping thread-safety concerns in mind, because most of Tornado's functionality isn't thread-safe).
Asynchronous database access
If you have asynchronous DB access in get_val_from_db_table1 and friends (e.g. with tornado-mysql) then you can use tornado.locks.Lock. Here's an example:
from tornado import web, gen, locks, ioloop

_lock = locks.Lock()

def synchronised(coro):
    async def wrapper(*args, **kwargs):
        async with _lock:
            return await coro(*args, **kwargs)
    return wrapper

class MainHandler(web.RequestHandler):
    async def get(self):
        result = await self.my_method('foo')
        self.write(result)

    @classmethod
    @synchronised
    async def my_method(cls, arg):
        # db access
        await gen.sleep(0.5)
        return 'data set for {}'.format(arg)

if __name__ == '__main__':
    app = web.Application([('/', MainHandler)])
    app.listen(8080)
    ioloop.IOLoop.current().start()
Note that the above applies to a normal single-process Tornado application. If you use tornado.process.fork_processes, then you can only go with multiprocessing.Lock.
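For the forked case, a rough sketch of what that could look like (an untested illustration under two assumptions: the lock is created before the fork so all children share it, and the blocking acquire is pushed to a thread so the IO loop isn't stalled):

import multiprocessing

from tornado import ioloop

# Created before fork_processes() runs, so every child shares it.
_mp_lock = multiprocessing.Lock()

async def my_method(arg):
    loop = ioloop.IOLoop.current()
    # A blocking acquire would stall this process's IO loop,
    # so run it in the default thread-pool executor.
    await loop.run_in_executor(None, _mp_lock.acquire)
    try:
        pass  # db access goes here
    finally:
        _mp_lock.release()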
Since you want to run those three db operations one right after the other, the function my_method must be non-asynchronous.
But this would also mean that my_method will block the server, and you definitely don't want that. One way I can think of is to run this function in another thread. This won't block the server, and it will keep accepting new requests while the operations are running. And since the function is non-async, db atomicity is guaranteed, as long as there is only a single worker thread (see below).
Here's the relevant code to get you started:
import concurrent.futures

# Don't set `max_workers` to more than 1, because then multiple
# threads would be able to perform db operations at once
executor = concurrent.futures.ThreadPoolExecutor(max_workers=1)

class MyHandler(...):
    @gen.coroutine
    def get(self):
        # `yield` waits for the db operations to finish;
        # if you don't want to wait and would rather return
        # a response immediately, remove the `yield` keyword
        yield executor.submit(MyHandler.my_method, my_arg1)
        self.write('Done')

    @classmethod
    def my_method(cls, my_arg1):
        # do db stuff ...
        return True
I have an API written in Flask. It uses SQLAlchemy to talk to a MySQL database. I don't use Flask-SQLAlchemy, because I don't like how that module forces you into a particular pattern for declaring the model.
I'm having a problem where my database connections are not closing. The object representing the connection goes out of scope, so I assume it is being garbage collected. I also explicitly call close() on the session. Despite this, the connections stay open long after the API call has returned its response.
sqlsession.py: Here is the wrapper I am using for the session.
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import NullPool

class SqlSession:
    def __init__(self, conn=Constants.Sql):
        self.db = SqlSession.createEngine(conn)
        Session = sessionmaker(bind=self.db)
        self.session = Session()

    @staticmethod
    def createEngine(conn):
        return create_engine(conn.URI.format(user=conn.USER, password=conn.PASS,
                                             host=conn.HOST, port=conn.PORT,
                                             database=conn.DATABASE, poolclass=NullPool))

    def close(self):
        self.session.close()
flaskroutes.py: Here is an example of the Flask app instantiating and using the wrapper object. Note that it instantiates it at the beginning, within the scope of the API call, then closes the session at the end; presumably the object is garbage collected after the response is returned.
def commands(self, deviceId):
    sqlSession = SqlSession(self.sessionType)  # <---
    commandsQueued = getCommands()
    jsonCommands = []
    for command in commandsQueued:
        jsonCommand = command.returnJsonObject()
        jsonCommands.append(jsonCommand)
        sqlSession.session.delete(command)
    sqlSession.session.commit()
    resp = jsonify({'commands': jsonCommands})
    sqlSession.close()  # <---
    resp.status_code = 200
    return resp
I would expect the connections to be cleared as soon as the HTTP response is sent, but instead the connections end up in the "SLEEP" state (when viewed in the MySQL command-line interface via 'show processlist').
I ended up using the advice from this SO post:
How to close sqlalchemy connection in MySQL
I strongly recommend reading that post to anyone having this problem. Basically, I added a dispose() call to the close method. Doing so destroys the engine's connection pool and the connections in it, whereas close() simply returns a connection to the pool while leaving it open.
def close(self):
    self.session.close()
    self.db.dispose()
This whole thing was a bit confusing to me, but at least now I understand more about the connection pool.
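To make the distinction concrete, here is a tiny sketch of the two calls on a plain SQLAlchemy engine (the sqlite URL is just a stand-in):

from sqlalchemy import create_engine

engine = create_engine("sqlite:///example.db")  # stand-in URL

conn = engine.connect()
conn.close()      # checks the DBAPI connection back into the engine's
                  # pool; the underlying connection stays open (the
                  # MySQL "SLEEP" state seen above)

engine.dispose()  # closes the pooled connections themselves and
                  # replaces the pool with a fresh, empty one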