Good day!
I am writing a Telegram bot using the aiogram library.
Everything seems to work fine when I run my code.
However, if I leave the bot running for a while, it eventually throws a timeout error. I can't understand what the issue is: is it my PC falling asleep, or something else?
I run the polling loop like this:
from aiogram import Bot, Dispatcher, types
from aiogram.contrib.fsm_storage.memory import MemoryStorage
from aiogram.utils import executor
import asyncio
import logging

from data import config
from utils.db_api.postgresql import Database

loop = asyncio.get_event_loop()

logging.basicConfig(format=u'%(filename)s [LINE:%(lineno)d] #%(levelname)-8s [%(asctime)s] %(message)s',
                    level=logging.INFO,
                    # level=logging.DEBUG,
                    )

bot = Bot(token=config.BOT_TOKEN, parse_mode=types.ParseMode.HTML)
storage = MemoryStorage()
dp = Dispatcher(bot, storage=storage)
db = Database()

if __name__ == '__main__':
    executor.start_polling(dp, on_startup=on_startup)
The error is this:
Traceback (most recent call last):
File "/Library/Python/3.8/site-packages/aiogram/dispatcher/dispatcher.py", line 381, in start_polling
updates = await self.bot.get_updates(
File "/Library/Python/3.8/site-packages/aiogram/bot/bot.py", line 110, in get_updates
result = await self.request(api.Methods.GET_UPDATES, payload)
File "/Library/Python/3.8/site-packages/aiogram/bot/base.py", line 231, in request
return await api.make_request(await self.get_session(), self.server, self.__token, method, data, files,
File "/Library/Python/3.8/site-packages/aiogram/bot/api.py", line 139, in make_request
async with session.post(url, data=req, **kwargs) as response:
File "/Library/Python/3.8/site-packages/aiohttp/client.py", line 1138, in __aenter__
self._resp = await self._coro
File "/Library/Python/3.8/site-packages/aiohttp/client.py", line 559, in _request
await resp.start(conn)
File "/Library/Python/3.8/site-packages/aiohttp/client_reqrep.py", line 913, in start
self._continue = None
File "/Library/Python/3.8/site-packages/aiohttp/helpers.py", line 721, in __exit__
raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
Related
I have a bot which parses links given by the user. When a client wants to parse a really huge amount of links, the bot parses them, creates a CSV file from those links, sends it to the user (the user can download and view this file), and then raises a TimeoutError:
Cause exception while getting updates.
Traceback (most recent call last):
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiogram/dispatcher/dispatcher.py", line 381, in start_polling
updates = await self.bot.get_updates(
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiogram/bot/bot.py", line 110, in get_updates
result = await self.request(api.Methods.GET_UPDATES, payload)
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiogram/bot/base.py", line 231, in request
return await api.make_request(await self.get_session(), self.server, self.__token, method, data, files,
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiogram/bot/api.py", line 139, in make_request
async with session.post(url, data=req, **kwargs) as response:
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiohttp/client.py", line 1138, in __aenter__
self._resp = await self._coro
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiohttp/client.py", line 559, in _request
await resp.start(conn)
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 913, in start
self._continue = None
File "/Users/alex26/miniforge3/envs/rq/lib/python3.8/site-packages/aiohttp/helpers.py", line 721, in __exit__
raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
Here is an example of what my bot looks like (this is not the real code, just an EXAMPLE):
from time import sleep

from aiogram import Bot, Dispatcher, types
from aiogram.utils import executor

bot = Bot(token=API_TOKEN)
dp = Dispatcher(bot)

def parse_1000_links():
    # it takes about 10 mins and is therefore the same as sleep(600)
    sleep(600)

@dp.message_handler()
async def msg_handler(message: types.Message):
    if message.text == 'parse 1000 links':
        res = parse_1000_links()
        create_csv_from_res(res)
        file_open_obj = open('data.csv', 'rb')
        await bot.send_document(message.from_user.id, file_open_obj)
.....

if __name__ == '__main__':
    executor.start_polling(dp, skip_updates=True)
The user even receives the final file from the bot, but after sending the file the bot raises this error. That's strange: if the user gets the file, it means everything worked (in my opinion).
How can I fix this issue?
Thanks
You should avoid blocking operations, because they freeze the WHOLE event loop.
If you can't use an async version of your dependency, run it in an executor:
https://docs.python.org/3/library/asyncio-eventloop.html#executing-code-in-thread-or-process-pools
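For example, a minimal sketch of the handler from the question, reusing the hypothetical parse_1000_links and create_csv_from_res helpers, that offloads the blocking work to the default thread pool so Telegram's get_updates calls keep being answered:

import asyncio

@dp.message_handler()
async def msg_handler(message: types.Message):
    if message.text == 'parse 1000 links':
        loop = asyncio.get_running_loop()
        # run the blocking functions in the default ThreadPoolExecutor;
        # the event loop stays free to poll Telegram in the meantime
        res = await loop.run_in_executor(None, parse_1000_links)
        await loop.run_in_executor(None, create_csv_from_res, res)
        with open('data.csv', 'rb') as file_open_obj:
            await bot.send_document(message.from_user.id, file_open_obj)

If the parsing is CPU-bound rather than I/O-bound, a ProcessPoolExecutor instance can be passed as the first argument of run_in_executor instead of None.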
My code is similar to this example:
import discord
import tweepy
import asyncio

class Client(discord.Client):
    async def on_ready(self):
        print("ready")
        global GUILD
        GUILD = discord.utils.get(self.guilds, name = "Any Guild")

class TweepyStream(tweepy.Stream):
    def on_connect(self):
        print("connected")

    def on_status(self, status):
        print(status.text)
        channel = discord.utils.get(GUILD.channels, name = "twitter-posts")
        asyncio.run(channel.send(status.text))
        # From here the Discord message should be sent

auth = tweepy.OAuthHandler(keys)
auth.set_access_token(tokens)

global api
api = tweepy.API(auth)

follow_list = []
follow = int(api.get_user(screen_name = "Any User").id)
print(follow)
follow_list.append(follow)
print(str(follow_list))

stream = TweepyStream(tokens and keys)
stream.filter(follow = follow_list, threaded=True)  # track = track,

client = Client()
client.run(token)
I try to receive Twitter posts and send them into a Discord channel, but it doesn't work. How can I do this (maybe without asyncio)?
Or is there a way to work with a Python Twitter API, which works with async functions?
My error:
Stream encountered an exception
Traceback (most recent call last):
File "C:\Python39\lib\site-packages\tweepy\streaming.py", line 133, in _connect
self.on_data(line)
File "C:\Python39\lib\site-packages\tweepy\streaming.py", line 387, in on_data
return self.on_status(status)
File "c:\Users\morit\Desktop\FLL 2021 Bot\DiscordBot\DiscordBotV0.7\example.py", line 18, in on_status
asyncio.run(channel.send(status.text))
File "C:\Python39\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "C:\Python39\lib\asyncio\base_events.py", line 642, in run_until_complete
return future.result()
File "C:\Python39\lib\site-packages\discord\abc.py", line 1065, in send
data = await state.http.send_message(channel.id, content, tts=tts, embed=embed,
File "C:\Python39\lib\site-packages\discord\http.py", line 192, in request
async with self.__session.request(method, url, **kwargs) as r:
File "C:\Python39\lib\site-packages\aiohttp\client.py", line 1117, in __aenter__
self._resp = await self._coro
File "C:\Python39\lib\site-packages\aiohttp\client.py", line 448, in _request
with timer:
File "C:\Python39\lib\site-packages\aiohttp\helpers.py", line 635, in __enter__
raise RuntimeError(
RuntimeError: Timeout context manager should be used inside a task
asyncio.run creates a new event loop. You need to use the existing event loop that the discord.Client uses. You can retrieve this with Client.loop, asyncio.get_running_loop, or asyncio.get_event_loop.
Because the stream runs in its own thread (threaded=True), you should probably also use asyncio.run_coroutine_threadsafe.
If you're using the current latest development version of Tweepy on the master branch, set to be released as v4.0 (which it seems like you are), then you can also look into using AsyncStream.
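A minimal sketch of that approach, reusing the placeholder GUILD, keys and tokens from the question and assuming discord.py creates client.loop at construction time (as the 1.x releases do):

class TweepyStream(tweepy.Stream):
    def __init__(self, *args, loop=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.loop = loop  # the event loop owned by discord.Client

    def on_status(self, status):
        channel = discord.utils.get(GUILD.channels, name="twitter-posts")
        # on_status runs in the stream's thread, so schedule the coroutine
        # on the bot's existing loop instead of creating a new one with asyncio.run
        asyncio.run_coroutine_threadsafe(channel.send(status.text), self.loop)

client = Client()
stream = TweepyStream(tokens and keys, loop=client.loop)
stream.filter(follow=follow_list, threaded=True)
client.run(token)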
I have a FastAPI endpoint that needs to download some files from HDFS to the local server.
I'm trying to use asyncio to run the function that downloads the files in a separate process.
I'm using FastAPI Depends to create an HDFS client and inject the object into the endpoint execution.
import asyncio
import base64
import json
from concurrent.futures.process import ProcessPoolExecutor

from fastapi import Depends, FastAPI, Request, Response, status
from hdfs import InsecureClient

app = FastAPI()

HDFS_URLS = ['http://hdfs-srv.local:50070']

async def run_in_process(fn, *args):
    loop = asyncio.get_event_loop()
    return await loop.run_in_executor(app.state.executor, fn, *args)  # wait and return result

def connectHDFS():
    client = InsecureClient(HDFS_URLS[0])
    yield client

def fr(id, img, client):
    # my code here
    client.download(id_identifica_foto_dir_hdfs, id_identifica_foto_dir_local, True, n_threads=2)
    # my code here
    return jsonReturn

@app.post("/")
async def main(request: Request, hdfsclient: InsecureClient = Depends(connectHDFS)):
    # Decode the received message
    data = await request.json()
    message = base64.b64decode(data['data']).decode('utf-8').replace("'", '"')
    message = json.loads(message)

    res = await run_in_process(fr, message['id'], message['img'], hdfsclient)

    return {
        "message": res
    }

@app.on_event("startup")
async def on_startup():
    app.state.executor = ProcessPoolExecutor()

@app.on_event("shutdown")
async def on_shutdown():
    app.state.executor.shutdown()
But I'm not able to pass the hdfsclient object along:
res = await run_in_process(fr, message['id'], message['img'], hdfsclient)
I'm getting the following error:
Traceback (most recent call last):
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__
return await self.app(scope, receive, send)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/fastapi/applications.py", line 199, in __call__
await super().__call__(scope, receive, send)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/applications.py", line 111, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/middleware/errors.py", line 181, in __call__
raise exc from None
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/middleware/errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/exceptions.py", line 82, in __call__
raise exc from None
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/routing.py", line 566, in __call__
await route.handle(scope, receive, send)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/routing.py", line 227, in handle
await self.app(scope, receive, send)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/starlette/routing.py", line 41, in app
response = await func(request)
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/fastapi/routing.py", line 202, in app
dependant=dependant, values=values, is_coroutine=is_coroutine
File "/home/kleyson/.virtualenvs/reconhecimentofacial/lib/python3.7/site-packages/fastapi/routing.py", line 148, in run_endpoint_function
return await dependant.call(**values)
File "./asgi.py", line 86, in main
res = await run_in_process(fr, message['id'], message['img'], hdfsclient)
File "./asgi.py", line 22, in run_in_process
return await loop.run_in_executor(app.state.executor, fn, *args) # wait and return result
File "/usr/lib/python3.7/multiprocessing/queues.py", line 236, in _feed
obj = _ForkingPickler.dumps(obj)
File "/usr/lib/python3.7/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
How can I have the hdfsclient available inside the fr() function without creating a new connection on every request? In other words, how can I create the hdfsclient at application startup and use it inside the function?
The entire point of asyncio is to do what you are trying to achieve in the same process.
The typical example is a web crawler, where you open multiple requests within the same thread/process and then wait for them to finish. This way you get data from multiple URLs without having to wait for each request to finish before starting the next one.
The same applies in your case: call your async function that downloads the file, do your other work, and then wait for the download to complete (if it hasn't already). Sharing data between processes is not trivial, and that is why your function is failing: the client object cannot be pickled and sent to the worker process.
I suggest you first understand what async is and how it works before jumping into something you don't understand.
Some tutorials on asyncio
https://www.datacamp.com/community/tutorials/asyncio-introduction
https://realpython.com/lessons/what-asyncio/
https://docs.python.org/3/library/asyncio.html
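As one possible sketch of staying in a single process (an illustration under assumptions, not the only fix: it swaps the question's ProcessPoolExecutor for a thread pool so the unpicklable client never has to cross a process boundary, and the download paths are hypothetical):

import asyncio
from concurrent.futures import ThreadPoolExecutor

from fastapi import FastAPI, Request
from hdfs import InsecureClient

app = FastAPI()
HDFS_URL = 'http://hdfs-srv.local:50070'  # taken from the question

@app.on_event("startup")
async def on_startup():
    # one shared client and one worker pool for the whole application
    app.state.hdfsclient = InsecureClient(HDFS_URL)
    app.state.executor = ThreadPoolExecutor()

@app.on_event("shutdown")
async def on_shutdown():
    app.state.executor.shutdown()

def fr(id, img, client):
    # blocking HDFS download runs in a worker thread; the paths are placeholders
    client.download('/hdfs/path/' + str(id), '/tmp/' + str(id), True, n_threads=2)
    return {"id": id}

@app.post("/")
async def main(request: Request):
    data = await request.json()
    loop = asyncio.get_running_loop()
    # threads share memory, so the client object needs no pickling
    res = await loop.run_in_executor(app.state.executor, fr,
                                     data['id'], data['img'], app.state.hdfsclient)
    return {"message": res}

If a separate process really is required, the client has to be created inside the worker process (for example via the executor's initializer argument) instead of being passed from the parent.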
I have created a Telegram bot where I ask the users to send their location.
import asyncio
from asyncio import futures

import aioschedule as schedule
import pandas as pd
from telethon import events, Button

@bot.on(events.CallbackQuery(pattern=r'sendloc'))
async def sendlocation(event):
    sender = await event.get_input_sender()
    my_user = await bot.get_entity(sender.user_id)
    userId = my_user.id
    async with bot.conversation(userId) as conv:
        await conv.send_message('receiving', buttons=[Button.request_location(text='send Location', resize=10)])
        try:
            response = await conv.get_response()
        except futures.TimeoutError as e:
            await conv.cancel_all()
            return
        await conv.send_message('location accepted')

async def methodToSendLocation():
    users = pd.read_csv('telegramId.csv')
    for user in users['Telegram ID']:
        try:
            await bot.send_message(user, 'send location', buttons=Button.inline('sendloc'))
        except ValueError as e:
            print('Error occurred for user: {telId}'.format(telId=user))

def main():
    '''start the bot'''
    bot.start()
    schedule.every().day.at("11:00").do(methodToSendLocation)
    loop = asyncio.get_event_loop()
    while True:
        loop.run_until_complete(schedule.run_pending())
    bot.run_until_disconnected()

if __name__ == '__main__':
    main()
What does my program do?
It runs every day at 11:00 AM.
It fetches the Telegram IDs from the CSV file, sends a message asking the users to send their location, and then we process the location.
My query:
Sometimes a timeout error occurs in the sendlocation method of my program, and recently I observed that for some users the timeout error occurred within a minute after the user clicked the send Location button.
I am not able to understand why the error occurs after such a short interval.
And sometimes there is a delay in sending the location to the bot after the user clicks the send Location button.
stack trace:
Traceback (most recent call last):
File "app.py", line 118, in sendlocation
response = await conv.get_response()
File "C:\Users\Swarnim\AppData\Local\Programs\Python\Python37-32\lib\asyncio\tasks.py", line 449, in wait_for
raise futures.TimeoutError()
concurrent.futures._base.TimeoutError
Stack (most recent call last):
File "app.py", line 450, in <module>
main()
File "app.py", line 444, in main
loop.run_until_complete(schedule.run_pending())
File "/usr/lib/python3.6/asyncio/base_events.py", line 471, in run_until_complete
self.run_forever()
File "/usr/lib/python3.6/asyncio/base_events.py", line 438, in run_forever
self._run_once()
File "/usr/lib/python3.6/asyncio/base_events.py", line 1451, in _run_once
handle._run()
File "/usr/lib/python3.6/asyncio/events.py", line 145, in _run
self._callback(*self._args)
File "/home/epluser/.venv/automation/lib/python3.6/site-packages/telethon/client/updates.py", line 443, in _dispatch_update
await callback(event)
File "app.py", line 121, in sendlocation
I was learning some async/await in Python, and I wanted to try it, but
I'm getting this error while trying to connect to Chatango via websocket, and I don't know what it means.
I'm using Python 3.6.1 and aiohttp 2.2.3.
This is my code:
import asyncio
import aiohttp

msgs = []

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect("ws://s12.chatango.com:8081/") as ws:
            async for msg in ws:
                msgs.append(msg)
                print(msg)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
Full traceback:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\aiohttp\client_reqrep.py", line 559, in start
(message, payload) = yield from self._protocol.read()
File "C:\Program Files\Python36\lib\site-packages\aiohttp\streams.py", line 509, in read
yield from self._waiter
File "C:\Program Files\Python36\lib\site-packages\aiohttp\client_proto.py", line 165, in data_received
messages, upgraded, tail = self._parser.feed_data(data)
File "aiohttp\_http_parser.pyx", line 274, in aiohttp._http_parser.HttpParser.feed_data (aiohttp/_http_parser.c:4364)
aiohttp.http_exceptions.BadHttpMessage: 400, message='invalid constant string'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:/Users/joseh/Desktop/a.ws.py", line 42, in <module>
loop.run_until_complete(main())
File "C:\Program Files\Python36\lib\asyncio\base_events.py", line 466, in run_until_complete
return future.result()
File "C:/Users/joseh/Desktop/a.ws.py", line 34, in main
async with session.ws_connect("ws://s12.chatango.com:8081/") as ws:
File "C:\Program Files\Python36\lib\site-packages\aiohttp\client.py", line 603, in __aenter__
self._resp = yield from self._coro
File "C:\Program Files\Python36\lib\site-packages\aiohttp\client.py", line 390, in _ws_connect
proxy_auth=proxy_auth)
File "C:\Program Files\Python36\lib\site-packages\aiohttp\helpers.py", line 91, in __iter__
ret = yield from self._coro
File "C:\Program Files\Python36\lib\site-packages\aiohttp\client.py", line 241, in _request
yield from resp.start(conn, read_until_eof)
File "C:\Program Files\Python36\lib\site-packages\aiohttp\client_reqrep.py", line 564, in start
message=exc.message, headers=exc.headers) from exc
aiohttp.client_exceptions.ClientResponseError: 400, message='invalid constant string'
'invalid constant string' is a custom response from Chatango; they probably expect a specific protocol or some kind of auth header.
If you don't know much about how Chatango uses websockets, reverse engineering their system is probably not a good task for learning asyncio and aiohttp.
It's better to use something like httparrot, which just echoes back the message you send it.
Here's your code modified to use httparrot and send 5 messages, get 5 responses, then exit.
import asyncio
import aiohttp

msgs = []

async def main():
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect('ws://httparrot.herokuapp.com/websocket') as ws:
            ws.send_str('hello')
            async for msg in ws:
                msgs.append(msg)
                print(msg)
                ws.send_str('hello')
                if len(msgs) >= 5:
                    break

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()

print(msgs)