Concurrent execution of two Python methods - python

I'm creating a script that posts a message to both Discord and Twitter, depending on some input. I have two methods (in separate .py files), post_to_twitter and post_to_discord. What I want to achieve is that both of these try to execute even if the other fails (e.g. if there is some exception during login).
Here is the relevant code snippet for posting to discord:
def post_to_discord(message, channel_name):
    client = discord.Client()

    @client.event
    async def on_ready():
        channel = ...  # getting the right channel
        await channel.send(message)
        await client.close()

    client.run(discord_config.token)
and here is the snippet for the posting-to-Twitter part (stripped of the try-except blocks):
def post_to_twitter(message):
    auth = tweepy.OAuthHandler(twitter_config.api_key, twitter_config.api_key_secret)
    auth.set_access_token(twitter_config.access_token, twitter_config.access_token_secret)
    api = tweepy.API(auth)
    api.update_status(message)
Now, both of these work perfectly fine on their own and when being called synchronously from the same method:
def main(message):
    post_discord.post_to_discord(message)
    post_tweet.post_to_twitter(message)
However, I just cannot get them to work concurrently (i.e. to try to post to Twitter even if Discord fails, or vice versa). I've already tried a couple of different approaches with multi-threading and with asyncio.
Among others, I've tried the solution from this question, but got an error: No module named 'IPython'. When I omitted the IPython line and changed the methods to async, I got this error: RuntimeError: Cannot enter into task <ClientEventTask state=pending event=on_ready coro=<function post_to_discord.<locals>.on_ready at 0x7f0ee33e9550>> while another task <Task pending name='Task-1' coro=<main() running at post_main.py:31>> is being executed.
To be honest, I'm not even sure if asyncio would be the right approach for my use case, so any insight is much appreciated.
Thank you.

In this case, running the two things in completely separate threads (and completely separate event loops) is probably the easiest option at your level of expertise. For example, try this:
import post_discord, post_tweet
import concurrent.futures

def main(message):
    with concurrent.futures.ThreadPoolExecutor() as pool:
        fut1 = pool.submit(post_discord.post_to_discord, message)
        fut2 = pool.submit(post_tweet.post_to_twitter, message)
        # here closing the threadpool will wait for both futures to complete

    # make exceptions visible
    for fut in (fut1, fut2):
        try:
            fut.result()
        except Exception as e:
            print("error:", e)

Related

How to really make this operation async in Python?

I have very recently started checking out asyncio for Python. The use case is that I hit an endpoint /refresh_data and the data is refreshed (from an S3 bucket). But this should be a non-blocking operation; other API endpoints should still be able to be serviced.
So in my controller, I have:
def refresh_data():
    myservice.refresh_data()
    return jsonify(dict(ok=True))
and in my service I have:
async def refresh_data():
    try:
        s_result = await self.s3_client.fetch()
    except (FileNotFound, IOError) as e:
        logger.info("problem")

    gather = {i.pop("x"): i async for i in summary_result}
    # ... some other stuff
and in my client:
async def fetch():
    result = pd.read_parquet("bucket", "pyarrow", cols, filts).to_dict(orient="col1")
    return result
And when I run it, I see this error:
TypeError: 'async for' requires an object with __aiter__ method, got coroutine
I don't know how to move past this. Adding async definitely makes it return a coroutine type, but either I have implemented this messily or I haven't fully understood the asyncio package in Python. I have been working off of simple examples, but I'm not sure what's wrong with what I've done here.
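For what it's worth, that error message comes down to the difference between a coroutine and an async generator; here is a minimal sketch of the distinction (illustrative names only):

import asyncio

async def fetch_once():
    # a coroutine: produces a single value, consumed with `await`
    return {"a": 1}

async def fetch_rows():
    # an async generator: has __aiter__, so `async for` works on it
    for row in [{"x": 1}, {"x": 2}]:
        yield row

async def main():
    result = await fetch_once()        # fine
    async for row in fetch_rows():     # fine
        print(row)
    # async for row in fetch_once():   # TypeError: requires __aiter__

asyncio.run(main())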

How to restart a coroutine after a websocket stream stops receiving data?

I'm writing an asyncio application to monitor prices of crypto markets and trade/order events, but for an unknown reason some streams stop receiving data after a few hours. I'm not familiar with the asyncio package and I would appreciate help in finding a solution.
Basically, the code below establishes websocket connections with a crypto exchange to listen to streams of six symbols (ETH/USD, BTC/USD, BNB/USD, ...) and trade events from two accounts (user1, user2). The application uses the library ccxtpro. The public method watch_ohlcv gets price streams, while the private methods watchMyTrades and watchOrders get new order and trade events at the account level.
The problem is that one or several streams get interrupted after a few hours, and the response object becomes empty or None. I would like to detect these streams after they stop working and restart them. How can I do that?
# tasks.py
@app.task(bind=True, name='Start websocket loops')
def start_ws_loops(self):
    ws_loops()

# methods.py
def ws_loops():

    async def method_loop(client, exid, wallet, method, private, args):
        exchange = Exchange.objects.get(exid=exid)
        if private:
            account = args['account']
        else:
            symbol = args['symbol']
        while True:
            try:
                if private:
                    response = await getattr(client, method)()
                    if method == 'watchMyTrades':
                        do_stuff(response)
                    elif method == 'watchOrders':
                        do_stuff(response)
                else:
                    response = await getattr(client, method)(**args)
                    if method == 'watch_ohlcv':
                        do_stuff(response)
                # await asyncio.sleep(3)
            except Exception as e:
                print(str(e))
                break
        await client.close()

    async def clients_loop(loop, dic):
        exid = dic['exid']
        wallet = dic['wallet']
        method = dic['method']
        private = dic['private']
        args = dic['args']
        exchange = Exchange.objects.get(exid=exid)
        parameters = {'enableRateLimit': True, 'asyncio_loop': loop, 'newUpdates': True}
        if private:
            log.info('Initialize private instance')
            account = args['account']
            client = exchange.get_ccxt_client_pro(parameters, wallet=wallet, account=account)
        else:
            log.info('Initialize public instance')
            client = exchange.get_ccxt_client_pro(parameters, wallet=wallet)
        mloop = method_loop(client, exid, wallet, method, private, args)
        await gather(mloop)
        await client.close()

    async def main(loop):
        lst = []
        private = ['watchMyTrades', 'watchOrders']
        public = ['watch_ohlcv']
        for exid in ['binance']:
            for wallet in ['spot', 'future']:
                # Private
                for method in private:
                    for account in ['user1', 'user2']:
                        lst.append(dict(exid=exid,
                                        wallet=wallet,
                                        method=method,
                                        private=True,
                                        args=dict(account=account)
                                        ))
                # Public
                for method in public:
                    for symbol in ['ETH/USD', 'BTC/USD', 'BNB/USD']:
                        lst.append(dict(exid=exid,
                                        wallet=wallet,
                                        method=method,
                                        private=False,
                                        args=dict(symbol=symbol,
                                                  timeframe='5m',
                                                  limit=1
                                                  )
                                        ))
        loops = [clients_loop(loop, dic) for dic in lst]
        await gather(*loops)

    loop = asyncio.new_event_loop()
    loop.run_until_complete(main(loop))
Let me share my experience, since I am dealing with the same problem.
CCXT is not expected to produce stalled streams after running for some time.
Unfortunately, practice and theory are different, and error 1006 happens quite often. I am using Binance, OKX, Bitmex and BTSE (BTSE is not supported by CCXT), and my code runs on an AWS server, so I should not have any connection issues. Binance and OKX are the worst as far as error 1006 is concerned. Honestly, after researching it on Google, I have only understood that 1006 is a NetworkError, and I know CCXT tries to resubscribe the channel automatically. All the other explanations I found online did not convince me. If somebody could give me more info about this error I would appreciate it.
In any case, every time an exception is raised, I put it in an exception_list as a dictionary containing info like the time in milliseconds, the method, the exchange, a description, etc. The exception_list is then passed to a handle_exception method. If the list contains two 1006 exceptions within X time, handle_exception concludes that we are out of sync with market data and trading must stop: I cancel all my limit orders and emit a beep (calling for human intervention).
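A minimal sketch of that bookkeeping might look like this (the window length, field names, and stop_trading helper are assumptions for illustration, not from the original code):

import time

exception_list = []

def record_exception(method, exchange, description):
    exception_list.append({
        "time_ms": int(time.time() * 1000),
        "method": method,
        "exchange": exchange,
        "description": description,
    })
    if handle_exception(exception_list, window_ms=60_000):
        stop_trading()  # hypothetical: cancel limit orders, beep for a human

def handle_exception(exceptions, window_ms):
    # out of sync if two 1006 errors landed within the window
    times = [e["time_ms"] for e in exceptions if "1006" in e["description"]]
    return len(times) >= 2 and times[-1] - times[-2] <= window_ms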
As for your second question:
restart these streams after they stop working, how can I do that
remember that you are Running Tasks Concurrently
If return_exceptions is False (default), the first raised exception is immediately propagated to the task that awaits on gather(). Other awaitables in the aws sequence won't be cancelled and will continue to run.
Here you can find info about restarting an individual task in a gather().
In your case, since you are using a single exchange (Binance) and unsubscribe is not implemented in CCXT, you will have to close the connection and restart all the tasks. You can still use the example in the link above for automating it. If you are using more than one exchange, you can design your code in a way that lets you close and restart only the exchange that failed.
Another option would be defining the tasks with more granularity in main, so that every task is related to a single, well-defined exchange/user/method/symbol and every task subscribes to a single channel. This will result in more verbose and less elegant code, but it will help you catch the exception and restart only a specific coroutine, as in the sketch below.
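As a rough illustration of the restart idea (a sketch, not tested against ccxtpro; the retry count and delay are assumptions), you can wrap each coroutine in a small supervisor and gather the supervisors instead:

import asyncio

async def supervised(coro_factory, retries=5, delay=5):
    # re-create and re-await the coroutine whenever it dies
    for attempt in range(retries):
        try:
            await coro_factory()
            return
        except Exception as e:
            print(f"stream died ({e}), restart {attempt + 1}/{retries} in {delay}s")
            await asyncio.sleep(delay)

# in main(), instead of gathering clients_loop(...) directly:
# loops = [supervised(lambda dic=dic: clients_loop(loop, dic)) for dic in lst]
# await asyncio.gather(*loops)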
I am obviously assuming that after error 1006 the channel status is unsubscribed.
A final thought: never leave a robot unattended.
Professional market makers with a team of engineers working in London do not go to the pub while their algos (usually co-located within the exchange) execute thousands of trades.
I hope this can help you or, at least, point you in the right direction for handling exceptions and restarting tasks.
You need to use callbacks.
For example:
ws = self.ws = await websockets.connect(END_POINTS, compression=None)  # step 1
await self.ws.send(SEND_YOUR_SUBSCRIPTION_MESSAGES)  # step 2

while True:
    response = await self.ws.recv()
    if response:
        await handler(response)
In the last line, await handler(response), you are sending the response to handler().
This handler() is the callback, it is the function that actually consumes your data that you receive from the exchange server.
In this handler(), you check whether the response is your desired data (bid/ask price etc.) or whether it raises an exception like ConnectionClosedError, in which case you restart the websocket by doing step 1 and step 2 from within your handler.
So basically, in the callback method you need to either process the data or restart the websocket and pass the handler to it again to receive the responses.
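A minimal sketch of that reconnect pattern (assuming the websockets library; names like run_stream are illustrative):

import websockets

async def run_stream(url, subscribe_msg, handler):
    # the outer loop re-does step 1 and step 2 whenever the socket drops
    while True:
        try:
            async with websockets.connect(url, compression=None) as ws:  # step 1
                await ws.send(subscribe_msg)                             # step 2
                async for response in ws:
                    await handler(response)
        except websockets.ConnectionClosedError:
            continue  # restart the websocket and resubscribe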
Hope this helps. I could not share the complete code as I need to clean it of sensitive business logic.

Forwarding Events with Python Faust

I am trying to forward messages to internal topics in Faust, as suggested in this Faust example:
https://faust.readthedocs.io/en/latest/playbooks/vskafka.html
I have the following code:

in_topic = app.topic("in_topic", internal=False, partitions=1)
batching = app.topic("batching", internal=True, partitions=1)
....

@app.agent(in_topic)
async def process(stream):
    async for event in stream:
        event.forward(batching)
        yield
But I always get the following error when running my pytest:
AttributeError: 'str' object has no attribute 'forward'
Was this feature removed, do I need to specify the topic differently to get an event, or is this an issue with pytest?
You are using the syntax backwards, that's why!
in_topic = app.topic("in_topic", internal=False, partitions=1)
batching = app.topic("batching", internal=True, partitions=1)
....

@app.agent(in_topic)
async def process(stream):
    async for event in stream:
        await batching.send(value=event)
        yield
Should actually work.
Edit:
This is also the only way that I could make correct use of pytest with Faust.
The sink method is only usable if you delete all but the last sink of the mocked agent, which does not seem intuitive.
If you first import your sink and then add this decorator to your test, everything should work fine:
from my_path import my_sink, my_agent
from unittest.mock import patch, AsyncMock

@patch(__name__ + '.my_sink.send', new_callable=AsyncMock)
async def test_my_agent(test_app):
    async with my_agent.test_context() as agent:
        payload = 'This_works_fine'
        await agent.put(payload)
        my_sink.send.assert_called_with(value=payload)
Try the next one:
@app.agent(in_topic, sink=[batching])
async def process(stream):
    async for event in stream:
        yield event
The answer from @Florian Hall works fine. I also found out that the forward method is only implemented for Events; in my case I received a str. The reason could be the difference between Faust Topics and Channels. Another thing is that using pytest with the Faust test_context() behaves weirdly; for example, it forces you to yield even if you don't have a sink defined.
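For completeness, a hedged sketch: the stream normally yields decoded values (hence the str), but if you do want Event objects with .forward(), Faust lets you iterate the underlying events with stream.events(). Assuming the same topics as above:

@app.agent(in_topic)
async def process(stream):
    # stream.events() yields faust.Event objects rather than decoded values
    async for event in stream.events():
        await event.forward(batching)
        yield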

How to use Tornado.gen.coroutine in TCP Server?

I wrote a TCP server with Tornado. Here is the code:
#! /usr/bin/env python
# coding=utf-8
from tornado.tcpserver import TCPServer
from tornado.ioloop import IOLoop
from tornado.gen import *

class TcpConnection(object):
    def __init__(self, stream, address):
        self._stream = stream
        self._address = address
        self._stream.set_close_callback(self.on_close)
        self.send_messages()

    def send_messages(self):
        self.send_message(b'hello \n')
        print("next")
        self.read_message()
        self.send_message(b'world \n')
        self.read_message()

    def read_message(self):
        self._stream.read_until(b'\n', self.handle_message)

    def handle_message(self, data):
        print(data)

    def send_message(self, data):
        self._stream.write(data)

    def on_close(self):
        print("the monitored %d has left", self._address)

class MonitorServer(TCPServer):
    def handle_stream(self, stream, address):
        print("new connection", address, stream)
        conn = TcpConnection(stream, address)

if __name__ == '__main__':
    print('server start .....')
    server = MonitorServer()
    server.listen(20000)
    IOLoop.instance().start()
And I get the error assert self._read_callback is None, "Already reading". I guess the error is caused by multiple commands reading from the socket at the same time, so I changed the function send_messages to use tornado.gen.coroutine. Here is the code:
@gen.coroutine
def send_messages(self):
    yield self.send_message(b'hello \n')
    response1 = yield self.read_message()
    print(response1)
    yield self.send_message(b'world \n')
    print((yield self.read_message()))
But there are other errors: the code seems to stop after yield self.send_message(b'hello \n'), and the following code does not execute.
How should I fix this? If you're aware of any Tornado TCP server (not HTTP!) code that uses tornado.gen.coroutine, please tell me. I would appreciate any links!
send_messages() calls send_message() and read_message() with yield, but these methods are not coroutines, so this will raise an exception.
The reason you're not seeing the exception is that you called send_messages() without yielding it, so the exception has nowhere to go (the garbage collector should eventually notice and print the exception, but that can take a long time). Whenever you call a coroutine, you should either use yield to wait for it to finish, or IOLoop.current().spawn_callback() to run the coroutine in the "background" (this tells Tornado that you do not intend to yield the coroutine, so it will print the exception as soon as it occurs). Also, whenever you override a method you should read the documentation to see whether coroutines are allowed (when you override TCPServer.handle_stream() you can make it a coroutine, but __init__() may not be a coroutine).
Once the exception is getting logged, the next step is to fix it. You can either make send_message() and read_message() coroutines (getting rid of the handle_message() callback in the process), or you can use tornado.gen.Task() to call coroutine-style code from a coroutine. I generally recommend using coroutines everywhere.
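A minimal sketch of the coroutine-based version described above (untested, assuming Tornado 4.x, where IOStream.write() and read_until() return Futures when no callback is passed):

from tornado import gen
from tornado.ioloop import IOLoop
from tornado.tcpserver import TCPServer

class TcpConnection(object):
    def __init__(self, stream, address):
        self._stream = stream
        self._address = address
        self._stream.set_close_callback(self.on_close)

    @gen.coroutine
    def send_messages(self):
        # with no callback argument, write() and read_until() return Futures,
        # so they can be yielded directly inside a coroutine
        yield self._stream.write(b'hello \n')
        response1 = yield self._stream.read_until(b'\n')
        print(response1)
        yield self._stream.write(b'world \n')
        print((yield self._stream.read_until(b'\n')))

    def on_close(self):
        print("connection closed", self._address)

class MonitorServer(TCPServer):
    @gen.coroutine
    def handle_stream(self, stream, address):
        # handle_stream may be a coroutine, so we can yield here;
        # note that __init__ no longer kicks off I/O itself
        conn = TcpConnection(stream, address)
        yield conn.send_messages()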

amqp queue_delete catch errors in async way

I've just gotten started using pika (v0.9.4) with Tornado (through the use of pika.adapters.tornado_connection.TornadoConnection), and I was wondering what the appropriate way is of catching errors when using, say, queue_delete, for when the queue you're trying to delete doesn't exist. RabbitMQ raises AMQPError, but I am not sure how this can be handled in an async way.
Does anyone have any insights on this?
Disclaimer: I'm the author of stormed-amqp
I would suggest trying with stormed-amqp
import logging
logging.basicConfig()

from tornado.ioloop import IOLoop
from stormed import Connection

def on_connect():
    ch = conn.channel()
    ch.queue_declare(queue='hello', durable=False)
    ch.queue_declare(queue='hello', durable=True)

def on_error(e):
    print "Got Connection error", e.reply_text, e.reply_code
    io_loop.stop()

conn = Connection(host='localhost')
conn.on_error = on_error
conn.connect(on_connect)

io_loop = IOLoop.instance()
io_loop.start()
Try to avoid the error. If you declare the queue and it doesn't exist, it will be created; then you can immediately delete it.
Or, if you will use that queue again in the next week or so, i.e. it is not single-use, then just leave it around and handle deletion as a system-admin activity that cleans up long-term idle queues.
Or just declare your queues with the auto-delete attribute and they will go away when you disconnect.
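A minimal sketch of the declare-then-delete idea, using a modern pika blocking connection for brevity (the async TornadoConnection version would make the same calls via callbacks):

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
ch = conn.channel()

# declaring is idempotent: it creates the queue if it doesn't exist,
# so the following delete can no longer hit a "no such queue" error
ch.queue_declare(queue='maybe-missing')
ch.queue_delete(queue='maybe-missing')

# alternatively, auto_delete queues vanish once the last consumer disconnects
ch.queue_declare(queue='throwaway', auto_delete=True)

conn.close()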
